-----------------------------------------------------------------------------------
Post ID:7342
Sender:Julian Reschke <julian.reschke@...>
Post Date/Time:2007-01-01 13:33:29
Subject:Re: [rest-discuss] Atom, 'process-this'-POST and rockets
Message:
Elliotte Harold wrote:
> On 12/2/06, Bill de hOra <bill@dehora.net> wrote:
> >
> > It often seems to me that POST is a catchall for "do it". I guess the
> > reason I'm pushing on this is to figure out why the 2 verb web is a
> > runaway success. What's driving that? Is it because of the design
> > principles? Or because POST is so decoupled from meaning? Or would
> > things simply be much better even if we had a broader vocabulary deployed
> > under the design principles?
>
> I just did an interview with Bill Venners on exactly this:
>
> http://www.artima.com/lejava/articles/why_put_and_delete.html

Commenting over here; I don't want to get an account over there just to add a comment...

Anyway: the issue of PUT and DELETE when being repeated potentially causing overlapping updates/deletes with somebody else's changes is solved in HTTP. Just send an "If-Match" request header with the last ETag you got from the server (<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.14.24>).

If you're lucky and the remote server knows how to deal with that properly, you can even avoid re-sending a large entity with PUT by using the "Expect: 100-continue" mechanism (<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.14.20>).

With respect to the original topic: can anybody point out why having *more* verbs would not be REST-ful? For instance, consider PATCH and COPY/MOVE...

Best regards, Julian
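Julian's If-Match suggestion amounts to server-side optimistic locking. The sketch below is illustrative only (the function name and return convention are invented, not from any framework); it shows the decision a server makes per RFC 2616 section 14.24 when a conditional PUT or DELETE arrives.

```python
# Sketch of the server-side If-Match check described above (RFC 2616,
# section 14.24). Names here are hypothetical, not any real API.

def check_if_match(stored_etag, if_match_header):
    """Return the status the server should use for a conditional
    PUT/DELETE carrying an If-Match header: 200 to proceed, 412 to
    refuse (Precondition Failed)."""
    if if_match_header is None:
        return 200            # unconditional request: proceed
    if if_match_header == "*":
        # "*" matches any currently existing representation
        return 200 if stored_etag is not None else 412
    requested = [tag.strip() for tag in if_match_header.split(",")]
    # strong comparison: the client's ETag must equal the stored one
    return 200 if stored_etag in requested else 412

# A client that lost the race gets 412 instead of silently
# overwriting somebody else's update.
print(check_if_match('"v2"', '"v1"'))  # 412
print(check_if_match('"v2"', '"v2"'))  # 200
```

The client simply echoes back the ETag it received from its last GET; a 412 tells it to re-fetch, merge, and retry.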
Hi Julian,

On Jan 1, 2007, at 5:33 AM, Julian Reschke wrote:
> Commenting over here; I don't want to get an account over there just to
> add a comment...

I don't like creating accounts everywhere either, but requiring accounts for forum postings at Artima allows me among other things to better deal with comment spam. I ask for a minimum amount of info. It's not much more onerous than subscribing to this mailing list was.

> Anyway: the issue of PUT and DELETE when being repeated potentially
> causing overlapping updates/deletes with somebody else's changes is
> solved in HTTP. Just send an "If-Match" request header with the last
> ETag you got from the server
> (<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.14.24>).

That's the missing bit of information I've been looking for. That's an optimistic locking approach, which should solve the problem I brought up of overlapping PUTs and DELETEs. Thanks.

Bill

----
Bill Venners
President
Artima, Inc.
http://www.artima.com
> With respect to the original topic: can anybody point out why having
> *more* verbs would not be REST-ful? For instance, consider
> PATCH and COPY/MOVE...

From the RTF Dissertation:
http://www.ics.uci.edu/~fielding/pubs/webarch_icse2000.pdf

=====
Connectors
REST uses various connector types to encapsulate the activities of accessing resources and transferring resource representations. The connectors present an abstract interface for component communication, enhancing simplicity by providing a clean separation of concerns and hiding the underlying implementation of resources and communication mechanisms. The generality of the interface also enables substitutability: if the users' only access to the system is via an abstract interface, the implementation can be replaced without impacting the users. Since a connector manages network communication for a component, information can be shared across multiple interactions in order to improve efficiency and responsiveness.
======

My view is that the 'generality of the interface' is decreased when additional verbs are used.

Another concern is that if there are several different approaches for any given task, an application has to deal with each of the approaches and this can be costly. If a particular verb could be re-phrased in terms of other verbs, then those other verbs are more 'primitive'. With only a few 'primitive' ways of doing many things there is less need to adapt to many different approaches - a cheaper way to conduct business.

For example, it's hard to argue that DELETE could be rephrased as GET and vice-versa. It's easier to argue that PATCH could be rephrased as a PUT or POST to a related 'metadata' resource.

In addition, with too few operations there is less visibility on the meaning and purpose of an operation, which could result in a loss of efficiency or other undesired characteristics.
For example, using POST to delete representations rather than DELETE may work, but you lose the predictable repeatability of DELETE (you can resubmit a DELETE in the face of network issues, but it's a bad idea to resubmit a POST unless you have custom/external information indicating that it's okay - in which case, it's just like a DELETE only misspelled).
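The retry rule in the paragraph above can be sketched as a small client-side policy: blindly resubmit idempotent methods after a network failure, never POST. This is a minimal illustration (the helper and its signature are invented); the method classification follows RFC 2616 section 9.1.2.

```python
# Sketch of "resubmit a DELETE, but not a POST": only idempotent
# methods are retried automatically after a network error.

IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def send_with_retry(method, do_request, retries=2):
    """do_request() performs the actual HTTP exchange and may raise
    IOError on network failure. Non-idempotent methods get exactly
    one attempt; retrying them needs external knowledge."""
    attempts = 1 + (retries if method in IDEMPOTENT_METHODS else 0)
    last_error = None
    for _ in range(attempts):
        try:
            return do_request()
        except IOError as e:
            last_error = e
    raise last_error
```

A flaky DELETE succeeds on the second attempt; the same flaky exchange issued as a POST propagates the error to the caller, who must decide (with application knowledge) whether a repeat is safe.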
On Jan 1, 2007, at 12:22 PM, S. Mike Dierken wrote:
> > With respect to the original topic: can anybody point out why having
> > *more* verbs would not be REST-ful? For instance, consider
> > PATCH and COPY/MOVE...
>
> From the RTF Dissertation:
> http://www.ics.uci.edu/~fielding/pubs/webarch_icse2000.pdf
> [...]
> My view is that the 'generality of the interface' is decreased when
> additional verbs are used.

I disagree. The generality refers to all resources having the same interface, not all resources having an artificially limited interface. It isn't even necessary for all resources to support the same set of methods -- only that, when supported, they mean the same thing to all resources. PATCH was in my original HTTP/1.1 proposal and, assuming it is implemented the way I described, is just as RESTful as PUT.

The limiting factor of only a few methods is a side-effect of the architectural constraint. Given that all methods have to mean the same thing to all resources, there are a very limited number of semantics that can usefully fit within a method.

> Another concern is that if there are several different approaches for any
> given task, an application has to deal with each of the approaches and this
> can be costly. If a particular verb could be re-phrased in terms of other
> verbs, then those other verbs are more 'primitive'. With only a few
> 'primitive' ways of doing many things there is less need to adapt to many
> different approaches - a cheaper way to conduct business.
> For example, it's hard to argue that DELETE could be rephrased as GET and
> vice-versa. It's easier to argue that PATCH could be rephrased as a PUT or
> POST to a related 'metadata' resource.

So can DELETE. PATCH has very specific semantics and a very specific goal of reducing bits on updates. It is a separate method because it needs access to the same (generic) conditional mechanisms as PUT and because POST (when applied to an authorable resource) means append.

MOVE and COPY are namespace operations. The problem with the WebDAV definitions is that they target the wrong resource and then stick another URI in an arbitrary header field. That is due to trying to squeeze a multitarget operation into a protocol (HTTP) that simply wasn't designed for multiple targets. The target should be the collection that is being changed. This is an architecture detail, not a question of architectural style.

REPORT, on the other hand, is just plain evil. It is bad architecture, violates REST, avoids giving a URI to important resources, and tunnels arbitrary methods through HTTP.

....Roy
On Mon, 2007-01-01 at 14:33 +0100, Julian Reschke wrote:
> With respect to the original topic: can anybody point out why having
> *more* verbs would not be REST-ful? For instance, consider PATCH and
> COPY/MOVE...

REST does not constrain the available methods to GET, PUT, POST, and DELETE. It does, however, constrain the available methods to be some universally-understood set. In REST this set is meant to evolve over time as new demands are placed on the architecture, just as new content types are added as new demands emerge. New methods are supposed to be added over time. Unused methods eventually should be deprecated.

The fact that we have gotten away with so few methods for so long on the web suggests that the web is a fairly mature architecture. It is probably also influenced by the amount of tunnelling (or at least "process this") that tends to go over the POST method.

Benjamin
Roy T. Fielding wrote:
> ...
> REPORT, on the other hand, is just plain evil. It is bad architecture,
> violates REST, avoids giving a URI to important resources, and tunnels
> arbitrary methods through HTTP.
> ...

Sorry? REPORT by definition isn't as bad as POST, being a safe method. Thus, it can *not* tunnel arbitrary methods, at least as long as the people using it read the method definition (<http://greenbytes.de/tech/webdav/rfc3253.html#rfc.section.3.6>).

Best regards, Julian
S. Mike Dierken wrote:
> For example, it's hard to argue that DELETE could be rephrased as GET and
> vice-versa. It's easier to argue that PATCH could be rephrased as a PUT or
> POST to a related 'metadata' resource.

I suspect PATCH is most effectively rephrased as it is now: a PUT of the entire resource rather than a diff file. For most use cases there just isn't enough benefit to sending only the diffs to justify a new verb. That's more true as time passes and bandwidth expands. Plus it handily avoids the question of defining the diff format and patch algorithm.

--
Elliotte Rusty Harold
elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
> > For example, it's hard to argue that DELETE could be rephrased as GET
> > and vice-versa. It's easier to argue that PATCH could be rephrased as
> > a PUT or POST to a related 'metadata' resource.
>
> So can DELETE. PATCH has very specific semantics and a very
> specific goal of reducing bits on updates. It is a separate
> method because it needs access to the same (generic)
> conditional mechanisms as PUT and because POST (when applied
> to an authorable resource) means append.

Sorry - I used PATCH in that example without knowing the actual semantics. I think I had it confused with PROPPATCH. Or I was simply confused. I've been looking for a 'partial update' operation, so I'll take a look at PATCH - is this the authoritative description?

http://www.w3.org/Protocols/HTTP/1.1/spec.html#PATCH
Elliotte Harold wrote:
> Paul Downey wrote:
> > 3) as essential as PUT and DELETE are, they're blocked by many
> > firewall/proxies and it's a brave API developer who
> > depends upon them ..
>
> Yes, that's a problem, though I really don't think that tunneling PUT
> and DELETE through GET and POST is the solution. I tend to believe that
> firewall preferences should be respected. If someone has chosen to block
> the functionality of PUT and DELETE, then that's their decision and we
> should respect it.

PUT maybe I can live without; tunneling DELETE over POST is nauseating. But this thing about firewalls is a tad backwards. Firewall admins don't really want to block PUT and DELETE (that's the DBA's job). I think the admins also learned from subsetted examples.

cheers
Bill
Elliotte Harold wrote:
> Is there any way to ask a server whether it considers two URLs to be the
> same? Should there be?

Not in HTTP. But you can express "Hesperus is Phosphorus" class equivalence with OWL using sameAs, and you can expose a query service that can answer yes or no for sameAs using SPARQL.

cheers
Bill
Elliotte Harold wrote: > > > Nic James Ferrier wrote: > > > I think we probably shouldn't respect it because it's probably not > > been done for the right reasons. > > And you know that how? It is not your place to say that I have > configured my firewall improperly. 'Cos if you had your firewall configured properly, you'd block POST first. cheers Bill
Mike Schinkel wrote: > > Ah yes, the ugly '@' prefix. It's to avoid naming clashes between > > built-in and custom URIs. > > I definitely wouldn't use something that requires encoding. It doesn't. According to [1] (section 3.3), the '@' character is legal in the URI path and does not require escaping. Cheers, - Steve [1] http://www.ietf.org/rfc/rfc2396.txt -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Hugh Winkler wrote: > > > > In other words, I can see how the form can be generated from the > > schema (modulo input field descriptions), but I'm not sure how to > > infer the complete schema from a form. Are there examples of this? > > > > Unsure I'm understanding you: Why would you want to infer a complete > schema from a form? To do input validation. Let me explain: I'm currently pondering how to best deal with resources that are pure behaviors (i.e. only accept POST) in Dream, our REST framework. If I associate a RelaxNG schema with the resource, I can use it to validate input before it reaches the implementation; and I _could_ use it to generate a form. The form would be returned on a GET request, making the resource reflective (or self-descriptive). Now, developers could use the form to interact with the resource directly, which has a lot pedagogical value. However, if I take the opposite approach, meaning that if a GET were to return a form, I would not get the automatic input validation benefit unless I were able to generate the schema from the form. Thus, my current inclination is to use a RelaxNG schema to both do input validation and interact with the resource. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
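Steve's validate-before-dispatch idea can be sketched generically. The code below is not the Dream API; the wrapper, the trivial required-element check (standing in for a real RelaxNG schema), and the example service are all invented for illustration.

```python
# Sketch of "validate input before it reaches the implementation":
# a POST body is checked against a declared schema before the handler
# runs. A real framework would plug in a RelaxNG validator here; this
# stub only checks that required child elements are present.
import xml.etree.ElementTree as ET

def make_validating_handler(required_elements, handler):
    def wrapped(xml_body):
        doc = ET.fromstring(xml_body)
        missing = [name for name in required_elements
                   if doc.find(name) is None]
        if missing:
            # invalid input never reaches the implementation
            return 400, "missing elements: " + ", ".join(missing)
        return 200, handler(doc)
    return wrapped

# Hypothetical pure-behavior resource that only accepts POST.
create_user = make_validating_handler(
    ["name", "email"],
    lambda doc: "created " + doc.findtext("name"))

print(create_user("<user><name>jo</name><email>jo@x</email></user>"))
print(create_user("<user><name>jo</name></user>"))
```

The same declared schema could drive form generation for a GET on the resource, which is the reflective behavior described above.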
I've created a short tutorial on REST and some common resource/service patterns. The tutorial is based on my experience with designing the API for our DekiWiki application [1]. I would welcome feedback on the accuracy of the content. The tutorial will be used to introduce developers to REST and establish a common framework for designing services. http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us Thanks in advance for taking a look at it. Cheers, - Steve [1] http://doc.opengarden.org/DekiWiki_API/Reference/DekiWiki -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Steve G. Bjorg wrote:
> Mike Schinkel wrote:
> > > Ah yes, the ugly '@' prefix. It's to avoid naming clashes between
> > > built-in and custom URIs.
> >
> > I definitely wouldn't use something that requires encoding.
>
> It doesn't. According to [1] (section 3.3), the '@' character is
> legal in the URI path and does not require escaping.
> [1] http://www.ietf.org/rfc/rfc2396.txt

I didn't check RFC 2396, but RFC 3986 obsoletes it. Reading from the "Reserved Characters" section[1], it appears that you should percent-encode the '@' character, but I could be misreading. Can anyone on the list with more experience tell me if I'm interpreting it incorrectly, and if so why?

[1] http://www.ietf.org/rfc/rfc3986.txt

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
I've been reflecting a little on this issue with versioned resources and the fact that a straightforward PUT would not be idempotent (or at least not exactly), but a PUT-302-PUT combination involves sending the content payload twice (a nasty shock for the author wanting to upload her 1GB video file). Is it possible to use the Expect mechanism to avoid having to send the payload for the first PUT? If so, what would the dialogue look like, would it involve a 302 or a 417 response, and can you even have a Location header with a 417 response? -- Chris Burdess
Chris Burdess wrote:
> I've been reflecting a little on this issue with versioned resources and
> the fact that a straightforward PUT would not be idempotent (or at least
> not exactly), but a PUT-302-PUT combination involves sending the content
> payload twice (a nasty shock for the author wanting to upload her 1GB
> video file).

When using DeltaV (RFC 3253), a PUT will always be idempotent unless you enable auto-versioning.

> Is it possible to use the Expect mechanism to avoid having to send the
> payload for the first PUT? If so, what would the dialogue look like,
> would it involve a 302 or a 417 response, and can you even have a
> Location header with a 417 response?

I would think so. The server would immediately send the 302 with the Location header.

One related problem is that the Java servlet API doesn't really allow servlet-based servers to do the right thing here. It would be really nice if there'd be some progress on that (<http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4812000>).

Best regards, Julian
On Tue, 2007-01-02 at 11:57 +0000, Chris Burdess wrote:
> I've been reflecting a little on this issue with versioned resources and
> the fact that a straightforward PUT would not be idempotent (or at least
> not exactly), but a PUT-302-PUT combination involves sending the content
> payload twice (a nasty shock for the author wanting to upload her 1GB
> video file).

I'm not familiar with the original discussion, but why should a 302 be necessary? If I PUT several times to <http://example.com/mydocument>, is it important to me that the <http://example.com/mydocument;1>, <http://example.com/mydocument;2>, and <http://example.com/mydocument;3> resources are created? My operation has succeeded. Whatever else the server chooses to do with my submission is up to it. I don't see any need to redirect a client to a specific new document version in order to allow their PUT operation to proceed.

Benjamin.
On Tue, 2007-01-02 at 06:57 -0500, Mike Schinkel wrote:
> Steve G. Bjorg wrote:
> > It doesn't. According to [1] (section 3.3), the '@' character is
> > legal in the URI path and does not require escaping.
> > [1] http://www.ietf.org/rfc/rfc2396.txt
>
> I didn't check rfc 2396, but rfc 3986 obsoletes it. Reading from the
> "Reserved Characters" section it appears that you should percent encode
> the '@' character, but I could be misreading. Can anyone on the list with
> more experience tell me if I'm interpreting it incorrectly, and if so why?

"@" is legal in most parts of a URI, since it is part of the pchar set.

As for the RESTfulness or not of Dream, I suppose I have some comments. I am reading <http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us>. My initial reactions:

* In REST it is likely that multiple resources will be served from a single defined set of state, i.e. a service. For example, an object or a set of objects are likely to have several resources associated with them that act as the network interface to these objects. I would suggest that instead of "Each Dream service (being) a resource", each Dream service should make a set of resources available.

* Returning a Dream blueprint to users of the system doesn't read to me as useful. If the blueprint exposure is an internal deployment mechanism, this is ok. The blueprint appears to be an internal document that allows particular methods on particular resources to be mapped to other method invocations on internally-defined objects. This is fine, but should not normally be communicated to users of the interface. Users should see published lists of URIs that meet particular requirements the users may have. Perhaps a cut-down form of the blueprint would be more appropriate than one that exposes internal classes and the like.

* It looks like access to HTTP headers and the like may be limited. Where is the URL returned in a POST's Location header in <http://doc.opengarden.org/Dream_SDK/Tutorials/Address_Book>?

On the whole the framework doesn't look particularly RESTful or non-RESTful. It would be up to a particular blueprint and implementation to conform or not conform to REST principles. If it is designed for interaction with HTTP and allows appropriate access to HTTP headers I don't see any obvious problems on the surface.

Benjamin
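Benjamin's pchar point can be demonstrated directly: per RFC 3986 section 3.3, '@' is in the pchar set and so is legal unescaped in a path segment. A small illustrative check (noting that Python's `urllib.parse.quote` is more conservative than the RFC by default and must be told '@' is safe):

```python
# '@' in a URI path: legal per RFC 3986 pchar, but escaped by
# urllib.parse.quote unless explicitly marked safe.
from urllib.parse import urlsplit, quote

# A raw '@' in the path round-trips through parsing untouched.
assert urlsplit("http://example.com/@api/users").path == "/@api/users"

print(quote("/@api/users"))             # default: '@' gets escaped
print(quote("/@api/users", safe="/@"))  # RFC-faithful: left alone
```

Both forms identify the same resource after decoding, but leaving '@' raw keeps the "built-in vs. custom" naming convention readable.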
On Jan 2, 2007, at 3:19 PM, Benjamin Carlyle wrote:
> I'm not familiar with the original discussion, but why should a 302
> be necessary?

Because the resource the client intended to update remains unchanged; thus the expectation implied by PUT fails and therefore a 2xx is inappropriate.

> If I PUT several times to <http://example.com/mydocument>, is it
> important to me that the <http://example.com/mydocument;1>,
> <http://example.com/mydocument;2>, and <http://example.com/mydocument;3>
> resources are created? My operation has succeeded.

No, because http://example.com/mydocument is not being changed.

Jan
Jan Algermissen wrote:
> On Jan 2, 2007, at 3:19 PM, Benjamin Carlyle wrote:
> > I'm not familiar with the original discussion, but why should a 302
> > be necessary?
>
> Because the resource the client intended to update remains unchanged;
> thus the expectation implied by PUT fails and therefore a 2xx is
> inappropriate.
>
> > If I PUT several times to <http://example.com/mydocument>, is it
> > important to me that the <http://example.com/mydocument;1>,
> > <http://example.com/mydocument;2>, and <http://example.com/mydocument;3>
> > resources are created? My operation has succeeded.
>
> No, because http://example.com/mydocument is not being changed.

Well, all the problems go away if the resource *is* changed. I have the feeling of "not-invented-here". Can anybody please explain why it's so bad to just do what RFC 3253 describes as the "checkout-in-place" feature?

Best regards, Julian
On 1/2/07, Julian Reschke <julian.reschke@...> wrote:
> [snip] Can anybody please explain why it's so bad to just do what
> RFC 3253 describes as the "checkout-in-place" feature?

Because it violates idempotency, see
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.2

'Silently' creating a new copy (on a public URI) as you suggest has a side-effect (namely an increment of the publicly-visible version number). However, if the request returns a redirect to the next version number then the caller can elect not to follow the redirect if they choose (or PUT repeatedly and idempotently onto the 'versioned' URI).

Of course, if the versioning is entirely private and not accessible to callers, you can do whatever you like. Idempotency only refers to what callers can see evidence of.

Hope that helps,
Alan Dean
On Jan 2, 2007, at 4:00 PM, Julian Reschke wrote: > > > Well, all the problems go away if the resource *is* changed. The issue IMO is that the client needs no knowledge whatsoever about versioning, it just understands PUT and the server uses plain HTTP facilities (redirect) to guide the client. The client will work equally with a non-versioning and a versioning server. In fact, the client hardly cares if there are versions or not, it just wants the server to 'store' what it sends. Jan > > I have the feeling of "not-invented-here". Can anybody please > explain why it's so bad to just do what RFC3744 describes as > "checkout-in-place" feature? > > Best regards, Julian > >
Jan Algermissen wrote:
> The issue IMO is that the client needs no knowledge whatsoever about
> versioning, it just understands PUT and the server uses plain HTTP
> facilities (redirect) to guide the client. The client will work
> equally with a non-versioning and a versioning server. In fact, the
> client hardly cares if there are versions or not, it just wants the
> server to 'store' what it sends.

Can you please explain how that would work with existing clients that follow HTTP/1.1 with respect to not following redirects for unsafe methods? (pointer?)

Best regards, Julian
Benjamin Carlyle wrote:
> I'm not familiar with the original discussion, but why should a 302
> be necessary?

See the thread http://tech.groups.yahoo.com/group/rest-discuss/message/7254

> If I PUT several times to <http://example.com/mydocument>, is it
> important to me that the <http://example.com/mydocument;1>,
> <http://example.com/mydocument;2>, and <http://example.com/mydocument;3>
> resources are created? My operation has succeeded. Whatever else the
> server chooses to do with my submission is up to it. I don't see any
> need to redirect a client to a specific new document version in order to
> allow their PUT operation to proceed.

It does seem to be a bit of a bone of contention. If we assume that /mydocument *does* change, i.e. that it is equivalent to /mydocument;current or maybe /mydocument?revision=current, and always reflects the state of the last change, then the PUT to /mydocument is only idempotent with respect to /mydocument and not to the entire namespace. This is a problem since RFC 2616 defines an idempotent method in terms of its side-effects, not its direct effects, if you see what I mean.

-- Chris Burdess
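The distinction Chris draws can be made concrete with a toy server-side store: if a new version is created only when the content actually changes, a repeated identical PUT has no visible side-effect on the namespace. The storage model below is invented purely for illustration.

```python
# Sketch: repeated identical PUTs stay idempotent with respect to the
# whole namespace, because duplicate bodies do not mint new versions.

class VersionedStore:
    def __init__(self):
        self.current = None
        self.versions = []      # /mydocument;1, /mydocument;2, ...

    def put(self, body):
        if body == self.current:
            return 204          # repeat PUT: no new version appears
        self.current = body
        self.versions.append(body)
        return 201 if len(self.versions) == 1 else 204

store = VersionedStore()
store.put("draft")
store.put("draft")              # idempotent repeat: no side-effect
store.put("final")
print(len(store.versions))      # 2, not 3
```

An auto-versioning server that minted /mydocument;N on every PUT regardless of content would fail this property, which is exactly the objection raised against checkout-in-place above.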
On Jan 2, 2007, at 4:15 PM, Chris Burdess wrote: > If we assume that > /mydocument *does* change, i.e. that it is equivalent to > /mydocument;current or maybe /mydocument?revision=current , and always > reflects the state of the last change The problem with this is that it only works with linear versioning; branching is not possible. Jan
On Jan 2, 2007, at 4:46 PM, Julian Reschke wrote:
> Can you please explain how that would work with existing clients that
> follow HTTP/1.1 with respect to not following redirects for unsafe
> methods? (pointer?)

Are you referring to section 10.3? I am not sure how much this constraint is targeted towards software user agents (as it specifically mentions the user). IMHO, it is a configuration issue - if the user instructs the user agent to follow the redirects by default, that would not contradict the specs (IMHO). I read it to mean that the user agent implementation should not silently follow the redirect but provide a means to detect it (if desired). Could be wrong though - is it?

Jan
Jan Algermissen wrote:
> On Jan 2, 2007, at 4:00 PM, Julian Reschke wrote:
> > Well, all the problems go away if the resource *is* changed.
>
> The issue IMO is that the client needs no knowledge whatsoever about
> versioning, it just understands PUT and the server uses plain HTTP
> facilities (redirect) to guide the client. The client will work equally
> with a non-versioning and a versioning server. In fact, the client
> hardly cares if there are versions or not, it just wants the server to
> 'store' what it sends.

Well, I personally think that a redirect here is the wrong approach, if only because user agents may only follow a redirect without user interaction if the method is safe, which PUT is not (see <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.10.3>).

Best regards, Julian
Benjamin Carlyle wrote:
> "@" is legal in most parts of a uri, since it is part of the
> pchar set.

Thanks. Reading those specs correctly is so tricky...

> Perhaps a cut-down
> form of the blueprint would be more appropriate than one that
> exposes internal classes and the like.

That's what I was envisioning, since they are so closely related.

> On the whole the framework doesn't look particularly RESTful
> or non-RESTful. It would be up to a particular blueprint and
> implementation to conform or not conform to REST principles.
> If it is designed for interaction with HTTP and allows
> appropriate access to HTTP headers I don't see any obvious
> problems on the surface.

I'm curious; could RESTfulness be more strongly enforced by the framework?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
Alan Dean wrote:
> On 1/2/07, Julian Reschke <julian.reschke@...> wrote:
> > [snip] Can anybody please explain why it's so bad to just do what
> > RFC 3253 describes as the "checkout-in-place" feature?
>
> Because it violates idempotency, see
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.2
>
> 'Silently' creating a new copy (on a public URI) as you suggest has a
> side-effect (namely an increment of the publicly-visible version
> number).

But that is only the case when you have a server with autoversioning. PUT on an RFC 3253 version-controlled resource is idempotent unless you do autoversioning.

> However, if the request returns a redirect to the next version number
> then the caller can elect not to follow the redirect if they choose
> (or PUT repeatedly and idempotently onto the 'versioned' URI)

If a PUT request returns a 3xx, a sane client will assume that the PUT has not been executed at all.

> Of course, if the versioning is entirely private and not accessible to
> callers, you can do whatever you like. Idempotency only refers to what
> callers can see evidence of.

Best regards, Julian
Stelios Eliakis wrote: >> I don't know how to handle opaque URIs >> in Apache. I want /myserver/products/1/ >> to run an index.php and /myserver/products/ >> to run the same index.php etc. Do you know >> how can I do it without copying this file in every >> directory? Can't you just use mod_rewrite for that? (Are you familiar with using mod_rewrite?) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
Walden Mathews wrote:
> Maintaining multiple URIs for the same resource is not
> best practice. For the same reason you want uniformity in
> methods, you want one preferred URI for identifying a
> resource. The server should redirect requests for other
> equivalent URIs to the preferred one. Although that's not
> quite what you are asking, I think it is the answer.
I'm curious why you make the above statement. Is there a W3C finding that documents this, or some other paper? I've been doing a significant amount of research on URLs and URI best practices in the past 3+ months and have found nothing like this (though I could have missed it.)
To me, it appears on the surface to be a sound engineering principle, but in practice an unreasonable constraint. Given my research I've come to the conclusion there is a need for additional metadata standards that would allow better determination of URL equivalence when the expense of retrieving the resource is not prohibitive.
=============
Consider a blog with the following URL structure:
http://www.myblog.com/{author}/{year}/{month}/{day}/{post-name}/
Now let's assume those are hackable URLs, which means that all of the following will provide appropriate lists of posts, with each one having a breadcrumb path at the top:
http://www.myblog.com/{author}/{year}/{month}/{day}/
Home > {author} > {year} > {month}
http://www.myblog.com/{author}/{year}/{month}/
Home > {author} > {year}
http://www.myblog.com/{author}/{year}/
Home > {author}
http://www.myblog.com/{author}/
Home
Now given that, I might also want to have the following, each with its own appropriate breadcrumbs
http://www.myblog.com/{year}/
Home
http://www.myblog.com/{year}/{month}/
Home > {year}
http://www.myblog.com/{year}/{month}/{author}/
Home > {year} > {month}
http://www.myblog.com/{year}/{month}/{author}/{post-name}/
Home > {year} > {month} > {author}
(We are ignoring the problem of duplicates on a day or for an author within a year. I have strategies for handling them, but explaining them would make my reply much longer and isn't important for this discussion.)
So a user decides to drill down the year/month/author path and then selects a post. They read the post and then look to click the breadcrumb to go back to author's page.
But wait, the year/month/author breadcrumb isn't there because the server redirected them to the author/year/month version of the URL. The user is rather confused, and frankly a bit pissed off, because they had seen an article they wanted to read and now aren't sure where it is. So they leave the website in a huff, vowing never to return.
=============
Consider another set of URLs:
http://www.mycarsite.com/{make}/{model}/{year}/
But it's also logical to have:
http://www.mycarsite.com/{year}/{make}/{model}/
Or even:
http://www.mycarsite.com/{make}/{year}/{model}/
=============
And those are but two examples; I have (literally) tens if not hundreds more I could provide. I can't imagine it would be good human interface design to bounce users around like you are suggesting.
Basically we are talking about information that can be viewed as a hierarchy, but for which the hierarchy is mutable. Essentially we are talking about Matrix URIs [1] where the only difference is slashes instead of semi-colons (the semi-colon syntax never caught on, and at this point I would guess both web developers and users are more comfortable with slashes in their URLs than with semi-colons.)
So wouldn't Matrix URIs violate the principles you mention above?
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
[1] http://www.w3.org/DesignIssues/MatrixURIs.html
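A server can support the interchangeable path orderings Mike describes without one handler per ordering: classify each segment by its shape and resolve any ordering to the same logical query. This is only an illustrative sketch (the function names are hypothetical, and it deliberately ignores the author/post-name ambiguity Mike also flags):

```python
import re

def classify(segment):
    """Classify a path segment by shape: a 4-digit year, a 1-2 digit
    month/day number, or a name (author or post-name)."""
    if re.fullmatch(r"\d{4}", segment):
        return "year"
    if re.fullmatch(r"\d{1,2}", segment):
        return "month_or_day"
    return "name"

def resolve(path):
    """Map any ordering of typed segments onto one canonical query dict,
    so /2006/12/mike/ and /mike/2006/12/ identify the same post list."""
    query = {}
    numbers = []
    for seg in path.strip("/").split("/"):
        kind = classify(seg)
        if kind == "year":
            query["year"] = int(seg)
        elif kind == "month_or_day":
            numbers.append(int(seg))
        else:
            query.setdefault("names", []).append(seg)
    if numbers:
        query["month"] = numbers[0]   # first number after the year
    if len(numbers) > 1:
        query["day"] = numbers[1]
    return query
```

Because both orderings resolve to the same query, the server can render each URL with the breadcrumb trail matching the path the user actually took.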
So it seems to me there is a
Walden
----- Original Message -----
From: "Elliotte Harold" <elharo@... <mailto:elharo%40metalab.unc.edu> >
To: "REST Discuss" <rest-discuss@yahoogroups.com <mailto:rest-discuss%40yahoogroups.com> >
Sent: Sunday, December 31, 2006 8:24 AM
Subject: [rest-discuss] Determining the equality of two URLs
: How would one determine the deep equality of two URLs? That is, it is
: possible for two URLs to identify the same resource, but how can one
: know this? (aside from trivial cases like http://www.oreilly.com <http://www.oreilly.com> and
: http://www.oreilly.com:80/ <http://www.oreilly.com:80/> )
:
: Is there anyway to ask a server whether it considers two URLs to be the
: same? Should there be?
:
: You could certainly implement this as a GET to a special URL on the
: server, but I'm beginning to wonder if this is a special case like HEAD
: where an additional verb might actually make sense.
:
: --
: Elliotte Rusty Harold elharo@... <mailto:elharo%40metalab.unc.edu>
: Java I/O 2nd Edition Just Published!
: http://www.cafeaulait.org/books/javaio2/ <http://www.cafeaulait.org/books/javaio2/>
: http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/ <http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/>
:
:
: __________ NOD32 1949 (20061230) Information __________
:
: This message was checked by NOD32 antivirus system.
: http://www.eset.com <http://www.eset.com>
:
:
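Part of Elliotte's question has a purely syntactic answer: RFC 3986 defines normalizations (case of scheme and host, default ports, empty paths) that let a client prove two URLs equivalent without asking anyone. A minimal sketch of that ladder; anything beyond it — two genuinely different URLs naming the same resource — only the server can answer:

```python
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize(url):
    """Apply RFC 3986 syntax-based normalization: lowercase the scheme
    and host, drop a default port, and supply '/' for an empty path."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = parts.hostname.lower() if parts.hostname else ""
    netloc = host
    if parts.port is not None and parts.port != DEFAULT_PORTS.get(scheme):
        netloc = f"{host}:{parts.port}"
    path = parts.path or "/"
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))
```

With this, the "trivial cases" in the question compare equal; everything else still requires out-of-band knowledge.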
Elliotte Harold wrote: >Nic James Ferrier wrote: > >>What reason can there be for preventing one's users from sending PUT or >>DELETE? >> >>I can't see one. >> > >Irrelevant. If the firewall admins see one, that's good enough. It's >their firewall, not yours. If the internal users are inconvenienced then >they are free to lobby to change their organization's firewall policy, >through whatever means are available within their organization. No, that's not reality. The internal admins configuring the firewall in many places are clueless (believe me, I ran a business for a while where that was truly the case but I couldn't afford to hire anyone better. And it will be the case in many small to medium size businesses.) The users will not know that the website is respecting their firewall "as it should"; all they will know is that the service doesn't work for them, and they won't have a clue that it has anything to do with the firewall (even if the website tells them.) All they will know is that they are going to give their business to that other website that does what it needs to do in order to work, as Nic suggests. >Indeed if the firewall policies are really brain-damaged then the most >effective way to convince people of this is to let things break rather >than working around their brain damage. This is unfortunately the reality of the web where "should" and "does" are often mismatched. Spend some time on the WHATWG list debating Ian Hickson if you don't believe me. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
>> REST-AHAH cuts the methods down to GET and POST (and OPTIONS and HEAD too I guess). Just for my edification, what does "AHAH" stand for? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
> Now I have another problem,
> I don't know how to handle opaque URIs in Apache. I want /myserver/products/1/ to run an index.php and /myserver/products/ to run the same index.php etc.
> Do you know how can I do it without copy this file in every directory?
> I know that it is not an Apache mailing list but I believe that most of you have solved this problem and probably can help me :)
Just drop these lines in an .htaccess file at, for example, /myserver/.
RewriteEngine On
RewriteBase /base/url/for/rewriting
# Only rewrite requests that don't map to a real file or directory
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /path/to/file.php [L]
Chase Urich
(Sorry for the double Stelios, forgot to adjust the reply to go to the group)
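Once the rewrite rules funnel every request to a single script, that script recovers the original path from the request environment and dispatches on it. A minimal sketch of that front-controller logic (in Python for illustration; the thread's index.php would do the equivalent with $_SERVER['REQUEST_URI'], and the route names here are hypothetical):

```python
def dispatch(request_uri):
    """Front controller: route /products/ and /products/{id}/ to the
    appropriate handler, treating the URI as opaque path segments."""
    segments = [s for s in request_uri.strip("/").split("/") if s]
    if segments and segments[0] == "products":
        if len(segments) == 1:
            return ("list_products", None)      # /products/
        return ("show_product", segments[1])    # /products/1/
    return ("not_found", None)
```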
On 1/2/07, Mike Schinkel <mikeschinkel@...> wrote: > [snip] > Just for my edification, what does "AHAH" stand for? See my bookmarks at http://del.icio.us/alan.dean/ahah
On 1/2/07, Stelios Eliakis <eliakis@...> wrote: > I have never heard mod_rewrite :) > Is something trivial? Difficult? Can you give me any tip? http://www.google.com/search?q=mod_rewrite
>> Stelios Eliakis wrote: >> I have never heard mod_rewrite :) >> Is something trivial? Difficult? Can you give me any tip? Not hard, if you know regular expressions. If not, beware. ;-) Well, I wrote a writeup about clean URLs for MediaWiki using mod_rewrite[1]. But just Google it[2]; you should find far more than enough to go on. If not, just ask. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ [1] http://wiki.welldesignedurls.org/Clean_Urls_for_MediaWiki [2] http://www.google.com/search?q=mod_rewrite
Alan Dean wrote: > Mike Schinkel wrote: >> Just for my edification, what does "AHAH" stand for? > See my bookmarks at http://del.icio.us/alan.dean/ahah <http://del.icio.us/alan.dean/ahah> Thanks. Jeesh, another acronym/meme! And here I thought that AHAH was just called POX! [1] -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ [1] Plain Old XML
On 12/31/06, Benjamin Carlyle <benjamincarlyle@...> wrote: > On Sat, 2006-12-30 at 19:08 -0600, Hugh Winkler wrote: > > On 12/30/06, Benjamin Carlyle <benjamincarlyle@...> wrote: > > > I'm not sure who you think will interpret this form on the client > > side. > > > You see, the thing is that the atom document format is already a > > form. A > > > client has already received the form by being programmed in a > > particular > > > way, and is now submitting the form. It knows what information > > should be > > > placed in each of the named fields. It knows how to construct the > > end > > > document. > > Ah, but it is a standard form for all servers -- not for my particular > > server app. And to know what kind of form to submit, you read a spec > > -- you did not GET the description from the server. It's all baked in > > at design time. > > Exactly. It has been agreed ahead of time. That's the problem --- "ahead of time" rather than dynamically at run time. >Now, I can just send my POST > request and know that the server understands a useful subset of what I > am sending them. Not sure how you would know it understands a useful subset. You won't have the slightest idea what parts of your document it understands, or doesn't. Doesn't handle multiple authors? Can only accept text/plain for <title>? > > What you are suggesting is that I first need to obtain a schema document > (which you are calling a form) A schema document is not a form. Maybe we should stay away from XML for a moment. Think HTML form... which is a little "program" telling the UA how to serialize a submission. I didn't mention RDF forms[1]... but I should have. > to see if the server is actually > understanding only a subset of the atom vocabulary. Not a subset... could be a superset. > Then I need to > customize my content to conform to this subset. As a machine, I don't > have any good way of doing that. See, you have this problem anyway.
If you are sending a server stuff it doesn't understand, or not enough stuff, your application will fail. With forms, at least, your application knows "Hey, I don't know how to fill out this required field". Same as a human would using an HTML form. Your client can report that to a human for correction. Atom (and any application protocol based on exchanging known document types) has to trade off between exhaustively specifying application behavior and exhaustively specifying failure handling. >If I come up to a server that has > support for an unexpectedly small subset of atom, I then have to > customize my content in an unexpected way. It is better for you to do the customizing. Take the example of a server that simply cannot honor text/html or application/xhtml+xml in the title field. It can only handle text/plain. Atom protocol says nothing at the moment about this situation, except that the server can change your POSTed data as it needs to. So presently my server either a) rejects your submission or b) stores it as text/plain. Better would be for your client to receive an Xform with a constraint specifying "text/plain" only -- then, if the user had any important rich content they wanted to put in the title, they can at least try to compensate. >That is to say, in a way that > no one programmed me to customise my content in :) > Well, you would have programmed it from this pov, so you would have handled these exceptional situations. > The server, on the other hand, is in a good position to customise the > content. It knows which subset of atom it understands, and it knows what > atom is generally. It knows multiple authors might be required, so is in > a position to either model those multiple authors or use an algorithm to > select an author for its model from the available list. See above. Yes, you are describing the undesirable behavior the current APP forces you into.
> > > What I proposed is that the form delivered by a server have just the > > elements that make sense for the server. My server might not know what > > to do with a <source> element. Using the standard "form", my server > > has to have handling in place if you submit a <source> element, and it > > has to describe to you the problem if you do submit <source> and my > > server rejects it. > > All atom elements make sense to the server, even if they don't fit the > server's internal model. > The server implements atom, after all. Not so -- the client may submit extension elements the server isn't aware of. > What you > are suggesting is adding an extra message exchange to move the > complexity of fitting atom into the server's model back to the client. Yes. > Neither side is going to be great at solving a model mismatch problem, > but the client is likely to be downright incapable. As above: The client is in the best position to take corrective action, so as to best fulfill its intent. >If you really want > to be able to sensibly deal with this class of server you need to encode > how that class of server behaves into the atom specifications (as you > suggested was already being done). Then the server will use a > generally-accepted algorithm to select which author it will use from the > ones available, and the client will know to put the most important > author first. > Yes... app is prescribing server behavior, not message semantics. > > > One might argue that the client and server should communicate online > > > before the client submits data that the server might not accept. In > > one > > > of the examples you mentioned earlier the server might not support > > the > > > full atom protocol, in that multiple authors may not be supported. > > To be clear -- whether a server has a model supporting multiple > > authors is a modeling, not a message exchange protocol, issue. > > The server's model doesn't allow for a full fidelity translation of the > atom protocol.
The model doesn't allow the full protocol to be > understood. > > > > As > > > such, the server might be able to publish a document that indicated > > a > > > subset of the atom format that is usable. > > >I suggest that while a > > > document format could be produced to communicate which subset is > > > important, the cost would likely outweigh the value. Certainly, > > allowing > > > for additional elements in the form would not be meaningful to a > > client > > > without further specification as to what those elements mean. Any > > > additions of this kind would likely require code changes to the > > client. > > > > Well I was focusing on what clients POST to servers, but it seemed > > like you began that para talking about what servers publish? > > > > In the case of servers describing what clients can send to them, that > > would just be the Xform, so I'm not clear how the cost outweighs the > > value -- it seems simple enough in this case. > > Hrrm... why not xml schema? Why not relax ng? Why not another format? > Schema != Form, but yeah, another kind of form ... RDF form... > Naming a particular specification doesn't really help. A standardisation > process still has to take place to achieve widespread acceptance. > It's already in place. Programs understand the meanings of Atom XML elements. So when presented with an Xform model specifying those elements, the client app knows how to populate those fields. > Your server can still offer an xform or a HTML form to allow new atom > entries to be created by humans. That needs no further standardisation > to occur. However, machine to machine communications is best and > simplest when the specification can be treated as the form that needs to > be filled out and understood. Well heck... if you believe that, then RPC is a good approach... it's all spelled out for the client programmers at design time.
>Additional negotiation between client and > server reduces the value of the standard and increases complexity > everywhere. > Forms made the web adaptive. Go to any airline reservation web site. They're all the same, but different too. They mostly do the same things, but Orbitz offers packages with hotels and cars, while Delta offers connections with partner airlines, and Priceline won't let you see the itinerary. There's some shared vocabulary among all those sites but varying behavior. If the airlines standardize their vocabulary for forms, you could program a client to interact, adaptively, with all those sites. But you could not constrain Priceline's app to squeeze into the same behavior model as Delta's. And you shouldn't -- you should encourage diversity among web apps, as has been successful on the web to date. Hugh [1] http://www.markbaker.ca/2003/05/RDF-Forms/
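Hugh's forms argument can be sketched concretely: the server advertises per-field constraints, and the client checks its intended submission against them before sending, so it can report "I can't fill out this field" instead of failing opaquely. The constraint vocabulary below is invented purely for illustration, not any actual Xform or Atom mechanism:

```python
# Hypothetical constraint document a server might advertise for entry
# creation: accepted media types per field, and a cap on repeated fields.
SERVER_FORM = {
    "title": {"types": ["text/plain"]},
    "author": {"max_occurs": 1},
}

def check_submission(entry, form):
    """Return a list of problems the client can surface before POSTing."""
    problems = []
    for field, constraints in form.items():
        values = entry.get(field, [])
        allowed = constraints.get("types")
        if allowed:
            for value in values:
                if value["type"] not in allowed:
                    problems.append(f"{field}: type {value['type']} not accepted")
        cap = constraints.get("max_occurs")
        if cap is not None and len(values) > cap:
            problems.append(f"{field}: at most {cap} occurrence(s) allowed")
    return problems
```

A client with an HTML title and two authors would learn of both mismatches up front, which is exactly the corrective opportunity Hugh argues the client (not the server) should have.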
On Jan 2, 2007, at 3:57 AM, Mike Schinkel wrote: > Steve G. Bjorg wrote: > > Mike Schinkel wrote: > > > > Ah yes, the ugly '@' prefix. It's to avoid naming clashes > between > > > > built-in and custom URIs. > > > > > > I definitely wouldn't use something that requires encoding. > > > > It doesn't. According to [1] (section 3.3), the '@' character is > > legal in the URI path and does not require escaping. > > [1] http://www.ietf.org/rfc/rfc2396.txt > > I didn't check rfc 2396, but rfc 3986 obsoletes it. Reading from > "Reserved > Characters" section[1] it appears that you should percent encode > the '@' > character, but I could be misreading. Can anyone on the list with more > experience tell me if I'm interpreting it incorrectly, and if so why? A URI generator should percent-encode reserved characters when they are not being used as delimiters, so that other layers that do use them as delimiters can treat them specially without misinterpreting normal data (e.g., a name that just happens to start with @ would be encoded, whereas the delimiter used by Dream would not). In other words, Dream is using a reserved character correctly. Whether or not @ is the best choice for that reserved character is a long story and probably depends on what URIs are allowed (some use @ in the path for other things) and whether or not it interfaces with XPath processing (which uses @ for attribute names). ....Roy
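Roy's data-versus-delimiter distinction is visible in any standard URI library: a producer percent-encodes '@' when it is ordinary data, and leaves it bare when it is deliberately serving as a delimiter. With Python's urllib.parse, the `safe` parameter expresses exactly that choice (the "@alerts" segment is a made-up example):

```python
from urllib.parse import quote

# '@' as ordinary data in a path segment: percent-encode it so it
# cannot be mistaken for a delimiter.
data_segment = quote("@alerts", safe="")        # -> "%40alerts"

# '@' deliberately used as a delimiter (as Dream does): declare it
# safe so it survives unencoded.
delimited_segment = quote("@alerts", safe="@")  # -> "@alerts"
```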
On Jan 2, 2007, at 9:19 AM, Mike Schinkel wrote: > Walden Mathews wrote: > > Maintaining multiple URIs for the same resource is not > > best practice. For the same reason you want uniformity in > > methods, you want one preferred URI for identifying a > > resource. The server should redirect requests for other > > equivalent URIs to the preferred one. Although that's not > > quite what you are asking, I think it is the answer. > > I'm curious why you make the above statement. Is there a W3C > finding that documents this, or some other paper? I've been doing a > significant amount of research on URLs and URI best practices in > the past 3+ months and have found nothing like this (though I could > have missed it.) It is inherent in the power laws of economics, the network effect of Metcalfe, the PageRank of Google, and I am sure it is mentioned somewhere in webarch. I gave a presentation to the W3C Tech Plenary that described it using a single graph that is bisected when other sites start referring to two separate URIs for the same resource. When the best resources on the Web are identified by the sites that link to them, introducing multiple URIs for the same resource has the effect of exponentially decreasing its perceived value. It also decreases cache efficiency, but resource owners don't respect that as much as the PageRank argument. ....Roy
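The preferred-URI practice Walden and Roy describe implies a concrete server behavior: answer requests for equivalent URIs with a permanent redirect to the one canonical form, so links and caches converge on it. A minimal sketch (the alias table and paths are hypothetical):

```python
# Map each alias URI to the one canonical URI for the resource.
CANONICAL = {
    "/2006/12/mike/": "/mike/2006/12/",
    "/2006/mike/12/": "/mike/2006/12/",
}

def respond(path):
    """301-redirect aliases to the canonical URI; serve it directly."""
    target = CANONICAL.get(path)
    if target is not None:
        return 301, {"Location": target}   # permanent redirect
    return 200, {}                          # canonical URI: serve as-is
```

Note that this is exactly the design Mike's breadcrumb example objects to; the trade-off is link/cache consolidation versus preserving the user's navigational context.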
On Jan 2, 2007, at 7:15 AM, Chris Burdess wrote: > Benjamin Carlyle wrote: > > If I PUT several times to <http://example.com/mydocument>, is it > > important to me that the <http://example.com/mydocument;1>, > > <http://example.com/mydocument;2>, and <http://example.com/ > mydocument;3> > > resources are created? My operation has succeeded. Whatever else the > > server chooses to do with my submission is up to it. I don't see any > > need to redirect a client to a specific new document version in > order to > > allow their PUT operation to proceed. > > It does seem to be a bit of a bone of contention. If we assume that > /mydocument *does* change, i.e. that it is equivalent to > /mydocument;current or maybe /mydocument?revision=current , and always > reflects the state of the last change, then the PUT to /mydocument is > only idempotent with respect to /mydocument and not to the entire > namespace. This is a problem since RFC 2616 defines an idempotent > method > in terms of its side-effects not its direct effects, if you see what I > mean. *sigh* Just ignore the definition of idempotent in RFC 2616. Anything specified in HTTP that defines how the server shall implement the semantics of an interface method is wrong, by definition. What matters is the effect on the interface as expected by the client, not what actually happens on the server to implement that effect. ....Roy
Mike, Thanks for your comments. A couple of followup questions for the list: Mike Dierken wrote: > First, it looks like some folks spent a good deal of time writing > extensive documentation - that's very nice. > Here are a couple comments: > > 1 Application ID > The application-id is placed in the URI, but it does not appear to > actually change what data is being accessed - it's not part of a data > identifier. An alternative would be to use the Authorization request > header (where username/passwords normally go) in order to specify the > application ID. > Pro: closer to protocol specification for authentication > Pro: common URI for a resource across applications provide more > chances for caching, hyperlink references, etc. > Con: logging/processing of requests on the server would need to > examine more than just the URI (hopefully this isn't a challenge for > your framework) > The application ID is actually a separate thing from the user ID and is more parallel to User-Agent. I think the user credentials would be the information passed in the Authorization: header, no? Motivation: The application ID is however related to authorization; it's the identity of a third party which is leveraging this API on behalf of an actual user. For example a mashup site would have a unique application ID. This lets us do tracking of who's using the APIs, and also lets us do rate limiting if necessary (say, if someone wrote a virus and attempted a DDOS attack -- obtaining an application ID is a manual process). It's not intended to really be a secure method of authorization, any more than User-Agent is, but useful nonetheless. > ... > > 3 User Sign In > Considering security and privacy of end-users is very good. However, > the approach of 'signing in' seems odd, complex and not at all a > service API. Specifically, the authentication specification describes > only how to redirect a user's web browser to a particular page. 
For > non-browser based applications, this would be unhelpful. > Essentially, the user's username/password is submitted via HTTPS and a > response has a Location: header with a URI with an auth=[something] > query term. That auth value is then used to generate a token that > (temporarily) identifies the user, which is then later used in other > user-specific requests. > Yes, authentication is a problem. I believe the above is a consequence of wanting to be able to control what the end user sees during authentication, and up to the point of getting an auth token is intended to be the same for all AOL APIs. For example, some applications or user accounts may require two factor authentication, or captcha challenge. (I believe that GData does something very similar.) There are also complex issues involved with signing out and timing out. Finally, there is also an intent to be able to let a user control what data is provided to different services (identified by application ID, again) -- something do-able with a token but not with username/password combinations. The latter seems to be the current default for composing web services but it's not a very good one. (OpenID is another potential option but of course that also involves a round trip through a web page.) Authentication is a very challenging topic and I think it's one that is going to be a gating factor in deployment of more sophisticated web services of all kinds. Note that the most used web services today don't require authentication, and I think that's partly because there isn't a really good answer for this right now. Any suggestions? -- Abstractioneer <http://feeds.feedburner.com/aol/SzHO> John Panzer System Architect http://abstractioneer.org
John Panzer <jpanzer@...> writes: > Authentication is a very challenging topic and I think it's one that is > going to be a gating factor in deployment of more sophisticated web > services of all kinds. Note that the most used web services today don't > require authentication, and I think that's partly because there isn't a > really good answer for this right now. Any suggestions? I can see where you're coming from but I can't fully agree. RESTful authentication is possible (I'm doing it... my app will be announced here this week!) Also, fully secure authentication is simply client certs. Client certs are great, they really work. Ok. They do require a lot of work. But it's good scalable stuff once it's done. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
In 2616*, I interpret "side effects" to mean resource state changes, as opposed to values returned in the response. I think Chris is interpreting "side effects" to mean changes to resources other than the target of the request. Is there any more to the issue than that? Or is my interpretation wrong? Walden * section 9.1.2 specifically ----- Original Message ----- From: "Roy T. Fielding" <fielding@...> To: "Chris Burdess" <dog@...> Cc: "REST Discuss" <rest-discuss@yahoogroups.com> Sent: Tuesday, January 02, 2007 3:16 PM Subject: Re: [rest-discuss] More on versioned resources : On Jan 2, 2007, at 7:15 AM, Chris Burdess wrote: : > Benjamin Carlyle wrote: : > > If I PUT several times to <http://example.com/mydocument>, is it : > > important to me that the <http://example.com/mydocument;1>, : > > <http://example.com/mydocument;2>, and <http://example.com/ : > mydocument;3> : > > resources are created? My operation has succeeded. Whatever else the : > > server chooses to do with my submission is up to it. I don't see any : > > need to redirect a client to a specific new document version in : > order to : > > allow their PUT operation to proceed. : > : > It does seem to be a bit of a bone of contention. If we assume that : > /mydocument *does* change, i.e. that it is equivalent to : > /mydocument;current or maybe /mydocument?revision=current , and always : > reflects the state of the last change, then the PUT to /mydocument is : > only idempotent with respect to /mydocument and not to the entire : > namespace. This is a problem since RFC 2616 defines an idempotent : > method : > in terms of its side-effects not its direct effects, if you see what I : > mean. : : *sigh* : : Just ignore the definition of idempotent in RFC 2616. Anything : specified in HTTP that defines how the server shall implement the : semantics of an interface method is wrong, by definition. 
What : matters is the effect on the interface as expected by the client, : not what actually happens on the server to implement that effect. : : ....Roy
Mike Schinkel wrote: > The internal admins configuring the firewall in many places are clueless > (believe me, I ran a business for a while where that was truly the case but > I couldn't afford to hire anyone better. And it will be the case in many > small to medium size businesses.) The market will deal with businesses like that. I don't see why genuinely competent organizations should have to put up with bad architectures to support their pointy-haired competitors. It seems suspicious to me that these purported admins who are so incompetent they can't properly manage PUT and DELETE knew enough to block these methods in the first place. I suspect what may really be going on is unchanged defaults in the firewalls and proxy servers. If indeed that's the case then it's much easier to fix the problem at its source by educating a relatively small number of proxy and firewall vendors. Indeed all you may need to do when a customer tells you your system seems broken is say, "Oh, you're using proxy X? That's broken and non-spec compliant. Use proxy Y instead and all will be fine." Of course, this is not a binary situation. Some will fix their systems and some won't. My experience leads me to believe that when you insist on spec compliance, more people will fix their systems and come into compliance than won't. You will lose a few percent. However if you try and support all the broken and brain damaged networks out there, you do far more damage to everyone. You end up hurting the compliant customers to support the noncompliant ones in a dozen different, subtle ways. Maximum net benefit to all involved is achieved by jettisoning the truly incompetent organizations that will not and cannot learn the proper way to do things. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
I think the answer you're looking for is found in the explanation often given about GET and side-effects; that the client didn't request them and so can't be held accountable. Similarly, from the POV of this thread, if a server non-idempotently "versions" a resource on a PUT, the client isn't requesting that action, the server just decided to do it, and therefore it's not a violation of PUT semantics ... as long as the server also sets the state of the targeted resource to that represented in the request, of course. I've understood this distinction for some time, but it still catches me on occasion... Mark. On 1/2/07, Walden Mathews <waldenm@...> wrote: > In 2616*, I interpret "side effects" to mean resource state changes, > as opposed to values returned in the response. I think Chris is > interpreting > "side effects" to mean changes to resources other than the target > of the request. Is there any more to the issue than that? Or is my > interpretation wrong? > > Walden > > * section 9.1.2 specifically > > ----- Original Message ----- > From: "Roy T. Fielding" <fielding@...> > To: "Chris Burdess" <dog@...> > Cc: "REST Discuss" <rest-discuss@yahoogroups.com> > Sent: Tuesday, January 02, 2007 3:16 PM > Subject: Re: [rest-discuss] More on versioned resources > > > : On Jan 2, 2007, at 7:15 AM, Chris Burdess wrote: > : > Benjamin Carlyle wrote: > : > > If I PUT several times to <http://example.com/mydocument>, is it > : > > important to me that the <http://example.com/mydocument;1>, > : > > <http://example.com/mydocument;2>, and <http://example.com/ > : > mydocument;3> > : > > resources are created? My operation has succeeded. Whatever else the > : > > server chooses to do with my submission is up to it. I don't see any > : > > need to redirect a client to a specific new document version in > : > order to > : > > allow their PUT operation to proceed. > : > > : > It does seem to be a bit of a bone of contention. If we assume that > : > /mydocument *does* change, i.e.
that it is equivalent to > : > /mydocument;current or maybe /mydocument?revision=current , and always > : > reflects the state of the last change, then the PUT to /mydocument is > : > only idempotent with respect to /mydocument and not to the entire > : > namespace. This is a problem since RFC 2616 defines an idempotent > : > method > : > in terms of its side-effects not its direct effects, if you see what I > : > mean. > : > : *sigh* > : > : Just ignore the definition of idempotent in RFC 2616. Anything > : specified in HTTP that defines how the server shall implement the > : semantics of an interface method is wrong, by definition. What > : matters is the effect on the interface as expected by the client, > : not what actually happens on the server to implement that effect. > : > : ....Roy -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
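Mark's distinction can be sketched in code. The following is a hypothetical illustration (the store class and all names are invented, not from the thread): the server may non-idempotently snapshot every PUT into version resources, yet from the client's view of the interface the repeated request changes nothing, because the target resource always ends up in the submitted state.

```python
# Editor's sketch (hypothetical): PUT stays idempotent at the interface
# even though the server versions each submission as a side effect.

class VersioningStore:
    def __init__(self):
        self.current = {}   # uri -> current representation (the interface)
        self.versions = {}  # uri -> snapshots (the server's own business)

    def put(self, uri, representation):
        # Server-chosen side effect: snapshot every submission.
        self.versions.setdefault(uri, []).append(representation)
        # PUT semantics: the target resource now holds the submitted state.
        self.current[uri] = representation

store = VersioningStore()
store.put("/mydocument", "draft 1")
store.put("/mydocument", "draft 1")  # repeated request

# The interface-visible state is unchanged by the repeat,
# even though the server quietly accumulated two snapshots.
print(store.current["/mydocument"])        # draft 1
print(len(store.versions["/mydocument"]))  # 2
```

The client never asked for `/mydocument;1` and `/mydocument;2`; whatever the server does beyond setting the target's state is invisible to the PUT contract.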
On Jan 2, 2007, at 4:42 PM, Mark Baker wrote: > I think the answer you're looking for is found in the explanation > often given about GET and side-effects; that the client didn't request > them and so can't be held accountable. Similarly, from the POV of > this thread, if a server non-idempotently "versions" a resource on a > PUT, the client isn't requesting that action, the server just decided > to do it, and therefore it's not a violation of PUT semantics ... as > long as the server also sets the state of the targeted resource to > that represented in the request, of course. > > I've understood this distinction for some time, but it still catches > me on occasion... Yes, that's it. We have to keep dancing around that bush because terminology is a committee-driven process. Everyone has an opinion and so no opinion is spec'd consistently. ....Roy
Roy T. Fielding wrote: > > Steve G. Bjorg wrote: > > > Mike Schinkel wrote: > > > > > Ah yes, the ugly '@' prefix. It's to avoid naming clashes > > between > > > > > built-in and custom URIs. > > > > > > > > I definitely wouldn't use something that requires encoding. > > > > > > It doesn't. According to [1] (section 3.3), the '@' character is > > > legal in the URI path and does not require escaping. > > > [1] http://www.ietf.org/rfc/rfc2396.txt > > > > I didn't check rfc 2396, but rfc 3986 obsoletes it. Reading from > > "Reserved Characters" section[1] it appears that you should percent > > encode the '@' > > character, but I could be misreading. Can anyone on the > list with more > > experience tell me if I'm interpreting it incorrectly, and > if so why? > > A URI generator should percent-encode reserved characters > when they are not being used as delimiters, so that other > layers that do use them as delimiters can treat them > specially without misinterpreting normal data (e.g., a name > that just happens to start with @ would be encoded, whereas > the delimiter used by Dream would not). > > In other words, Dream is using a reserved character correctly. > Whether or not @ is the best choice for that reserved > character is a long story and probably depends on what URIs > are allowed (some use @ in the path for other things) and > whether or not it interfaces with XPath processing (which > uses @ for attribute names). Interesting. Hmmm. I'm going to have to mull on that one a bit. Is it being used as a delimiter because it denotes a special class of URLs? That would not have occurred to me. I keep learning something new on this list every day... Thanks. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
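Roy's rule ("percent-encode reserved characters when they are not being used as delimiters") can be seen quickly with Python's `urllib.parse` — an editor's illustration, not part of the original posts:

```python
from urllib.parse import quote, unquote

# '@' as ordinary data in a path segment: percent-encode it, so it
# cannot be mistaken for a delimiter.
print(quote("@myname"))                      # %40myname

# '@' used deliberately as a delimiter (as Dream does): declare it
# "safe" and leave it literal.
print(quote("@builtin/feature", safe="@/"))  # @builtin/feature

# Decoding restores the data form.
print(unquote("%40myname"))                  # @myname
```

The same name thus appears as `%40myname` when it merely starts with `@`, but as `@builtin` when `@` is the generator's chosen delimiter — exactly the distinction between data and delimiter in RFC 3986.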
Roy T. Fielding wrote: > On Jan 2, 2007, at 9:19 AM, Mike Schinkel wrote: > > Walden Mathews wrote: > > > Maintaining multiple URIs for the same resource is not best > > > practice. For the same reason you want uniformity in methods, you > > > want one preferred URI for identifying a resource. The > server should > > > redirect requests for other equivalent URIs to the preferred one. > > > Although that's not quite what you are asking, I think it is the > > > answer. > > > > I'm curious why you make the above statement. Is there a > W3C finding > > that documents this, or some other paper? I've been doing a > > significant amount of research on URLs and URI best > practices in the > > past 3+ months and have found nothing like this (though I > could have > > missed it.) > > It is inherent in the power laws of economics, the network > effect of Metcalfe, the PageRank of Google, and I am sure it > is mentioned somewhere in webarch. I gave a presentation to > the W3C Tech Plenary that described it using a single graph > that is bisected when other sites start referring to two > separate URIs for the same resource. > > When the best resources on the Web are identified by the > sites that link to them, introducing multiple URIs for the > same resource has the effect of exponentially decreasing its > perceived value. > > It also decreases cache efficiency, but resource owners don't > respect that as much as the PageRank argument. Can you please address the usability issue I raised? You addressed the technical issues, but not the usability issues. Wouldn't this argue for additional authoritative metadata in HTTP headers and HTML <head> elements that both browsers and search engines can use to address the issues you raise? 
FYI, I've been doing a lot of research in this area, reading everything I could find on the web and more on the subject, and plan to publish a lot of comments regarding this as well as make calls for this metadata on the WDUI blog [1] over the next year. I've even come across documents you've written that I believed would imply such a thing (but their URLs are not readily available; I still have lots of information sorting and classifying left to do. But I will reference those writings when I publish.) Hopefully you'll agree there is a need for better metadata, especially from web apps (blogs, wikis, CMS systems, etc.) that can correctly manage such metadata. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ [1] http://blog.welldesignedurls.org/
Steve G. Bjorg wrote: > I've created a short tutorial on REST and some common > resource/service patterns. The tutorial is based on my > experience with designing the API for our DekiWiki > application [1]. > > I would welcome feedback on the accuracy of the content. > The tutorial will be used to introduce developers to REST > and establish a common framework for designing services. > http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us WHOA! I opened it and went to click your "Print Page" and up popped THE MOST AWESOME print dialog I have EVER SEEN on any website, and I print a lot. NICE! -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ P.S. Mind if I steal the idea from you? Man, I'd love to see that as a WordPress plugin.
> The application ID is actually a separate thing from the user ID and is more parallel to User-Agent. > I think the user credentials would be the information passed in the Authorization: header, no? Yes, credentials go in the Authorization header. I'm not sure having distinct user-agent values is useful in this case. The Authorization header can easily contain both the application-id and some transient token authorizing that request. The server could reject the request on an application-wide basis as well as per-resource (for example, if a user disallows a particular application from mucking with their stuff). > Finally, there is also an intent to be able to let a user control what data is provided to different services > (identified by application ID, again) -- something do-able with a token but not with username/password combinations. What if the request contained the username/password of the application, not the end-user? And the user could allow/deny access on a per-application basis. The user would not need to re-login for each and every third-party application (which might make them de-sensitized to guarding their password). ________________________________ From: John Panzer [mailto:jpanzer@...] Sent: Tuesday, January 02, 2007 12:58 PM To: Mike Dierken Cc: Nic James Ferrier; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] AOL and REST? Mike, Thanks for your comments. A couple of followup questions for the list: Mike Dierken wrote: First, it looks like some folks spent a good deal of time writing extensive documentation - that's very nice. Here are a couple comments: 1 Application ID The application-id is placed in the URI, but it does not appear to actually change what data is being accessed - it's not part of a data identifier. An alternative would be to use the Authorization request header (where username/passwords normally go) in order to specify the application ID. 
Pro: closer to protocol specification for authentication Pro: common URI for a resource across applications provide more chances for caching, hyperlink references, etc. Con: logging/processing of requests on the server would need to examine more than just the URI (hopefully this isn't a challenge for your framework) The application ID is actually a separate thing from the user ID and is more parallel to User-Agent. I think the user credentials would be the information passed in the Authorization: header, no? Motivation: The application ID is however related to authorization; it's the identity of a third party which is leveraging this API on behalf of an actual user. For example a mashup site would have a unique application ID. This lets us do tracking of who's using the APIs, and also lets us do rate limiting if necessary (say, if someone wrote a virus and attempted a DDOS attack -- obtaining an application ID is a manual process). It's not intended to really be a secure method of authorization, any more than User-Agent is, but useful nonetheless. ... 3 User Sign In Considering security and privacy of end-users is very good. However, the approach of 'signing in' seems odd, complex and not at all a service API. Specifically, the authentication specification describes only how to redirect a user's web browser to a particular page. For non-browser based applications, this would be unhelpful. Essentially, the user's username/password is submitted via HTTPS and a response has a Location: header with a URI with an auth=[something] query term. That auth value is then used to generate a token that (temporarily) identifies the user, which is then later used in other user-specific requests. Yes, authentication is a problem. I believe the above is a consequence of wanting to be able to control what the end user sees during authentication, and up to the point of getting an auth token is intended to be the same for all AOL APIs. 
For example, some applications or user accounts may require two-factor authentication, or a captcha challenge. (I believe that GData does something very similar.) There are also complex issues involved with signing out and timing out. Finally, there is also an intent to be able to let a user control what data is provided to different services (identified by application ID, again) -- something do-able with a token but not with username/password combinations. The latter seems to be the current default for composing web services but it's not a very good one. (OpenID is another potential option but of course that also involves a round trip through a web page.) Authentication is a very challenging topic and I think it's one that is going to be a gating factor in deployment of more sophisticated web services of all kinds. Note that the most used web services today don't require authentication, and I think that's partly because there isn't a really good answer for this right now. Any suggestions? -- Abstractioneer <http://feeds.feedburner.com/aol/SzHO> John Panzer System Architect http://abstractioneer.org
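The token flow John describes can be sketched in a few lines. This is an editor's hypothetical illustration — the function names, stores, and scheme are invented, not AOL's actual API: credentials are exchanged once (over HTTPS in practice) for a transient token, and later requests carry only that token, scoped to a particular application ID so the server can allow or deny per application.

```python
# Hypothetical sketch of a transient, app-scoped auth token flow.
import secrets

USERS = {"alice": "s3cret"}   # invented credential store
TOKENS = {}                   # token -> (username, app_id)

def sign_in(username, password, app_id):
    """Exchange credentials once for a transient token tied to one app."""
    if USERS.get(username) != password:
        return None
    token = secrets.token_hex(16)
    TOKENS[token] = (username, app_id)
    return token

def authorize(token, app_id):
    """Later requests present only the token; the server can enforce
    per-application allow/deny decisions here without seeing a password."""
    entry = TOKENS.get(token)
    return entry is not None and entry[1] == app_id

t = sign_in("alice", "s3cret", app_id="mashup-42")
print(authorize(t, "mashup-42"))   # True
print(authorize(t, "other-app"))   # False: the token is scoped to one app
```

The point of the indirection is exactly the one made above: the user's password is entered once, and each third-party application gets its own revocable grant rather than the password itself.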
I think that authenticating a client application isn't the hard problem here - I think it's the end-user authorizing that particular application to muck about with their data. Causing the user to re-enter username/password in a web page for each application implicitly is granting that application access. The 'shared secret' part of AOL's video service APIs is how they verify that the application is authentic. > -----Original Message----- > From: Nic James Ferrier [mailto:nferrier@...] > Sent: Tuesday, January 02, 2007 2:01 PM > To: John Panzer > Cc: Mike Dierken; rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] AOL and REST? > > John Panzer <jpanzer@...> writes: > > > Authentication is a very challenging topic and I think it's > one that > > is going to be a gating factor in deployment of more > sophisticated > > web services of all kinds. Note that the most used web > services today > > don't require authentication, and I think that's partly > because there > > isn't a really good answer for this right now. Any suggestions? > > I can see where you're coming from but I can't fully agree. > > RESTful authentication is possible (I'm doing it... my app > will be announced here this week!) > > Also, fully secure authentication is simply client certs. > Client certs are great, they really work. Ok. They do require > a lot of work. But it's good scalable stuff once it's done. > > -- > Nic Ferrier > http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
> This introduction will cover some basics of HTTP [...] You probably should mention that REST does not mean "HTTP" and mention why you chose to talk about HTTP (like, it displays many of the characteristics described by the REST style). > Compare this to mounted network drives, whose names have no way of being resolved by a recipient. Huh? I would need an explanation of how that's similar, so this could be clarified or replaced with something more understandable. > Similarly, we can also query a complex resource, such as a blog, to get parts of the data: This isn't querying a resource, it simply /is/ a resource. You might say something like a resource can represent complex information like "all the blog comments after a particular timestamp written by 'anonymous' ". The client doesn't need to know just how complex things could be for the server - it just does a GET. > All data resources support GET, but not all behavioral resources do. What's a "data resource" compared to a "behavioral resource"? You might rephrase that to say that some resources are read-only. > While a GET request cannot have side-effects, it can return only parts of the resource. > This means that GET is both an atomic read, as well as a query operation. This is not correct - GET does not return parts of a resource. At least, not the way you suggest with query terms. You might want to describe that a GET could be a simple static file or a complex query operation - both are simply retrievals of data. > If two PUT operations occur simultaneously, one of them will win and determine the final state of the resource. What happens to the other one? I think that both win. REST gives rise to all sorts of win-win situations. > In the case of two simultaneous DELETE operations, one of them may fail since the resource > will have already been deleted. Nope. It's important to realize that /both/ succeed. There should be no error code. 
The protocol does not assume a "locate object or fail, activate object or fail, invoke method on object" - it just says "make sure the resource is gone when the request completes". An absent resource is just as gone as a recently deleted one. > The POST operation is very generic and no specific meaning can be attached to it. There is meaning associated with it, and unique aspects regarding repeatability and caching. > In general, use POST when only a subset of a resource needs to be modified and it cannot be accessed > as its own resource; or when the equivalent of a method call must be exposed. Rather than suggesting that generic method calls are appropriate via POST, perhaps you could narrow it down to modification operations that don't fit into PUT or DELETE. > We can also update individual properties of the entry, because they are exposed as nested resources. There isn't a concept of a 'nested resource'. And this example only works because the server implements it that way, it's not part of REST or HTTP (or even most Web server frameworks unfortunately). And there is no requirement that the second resource that is the 'property of the entry' use a URI with paths - it could just as easily be: PUT http://myserver/myaddressbook/johndoe?prop=27 PUT http://myserver/myaddressbook/phones?user=johndoe&type=work > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Steve G. Bjorg > Sent: Monday, January 01, 2007 9:28 PM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] Request for feedback: REST for the Rest of Us > > I've created a short tutorial on REST and some common > resource/service patterns. The tutorial is based on my > experience with designing the API for our DekiWiki application [1]. > > I would welcome feedback on the accuracy of the content. The > tutorial will be used to introduce developers to REST and > establish a common framework for designing services. 
> http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us > > Thanks in advance for taking a look at it. > > > Cheers, > > - Steve > > [1] http://doc.opengarden.org/DekiWiki_API/Reference/DekiWiki > > -------------- > Steve G. Bjorg > http://www.mindtouch.com > http://www.opengarden.org
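The DELETE point in the feedback above ("make sure the resource is gone when the request completes") is easy to demonstrate. An editor's sketch, with invented names — one reasonable server reading is to report success whether or not the resource still existed, so two "simultaneous" DELETEs both succeed:

```python
# Hypothetical sketch: an idempotent DELETE handler where an absent
# resource is just as gone as a recently deleted one.

resources = {"/entry/1": "hello"}

def delete(uri):
    resources.pop(uri, None)   # removing an absent key is not an error
    return 204                 # success either way (204 No Content)

print(delete("/entry/1"))  # 204
print(delete("/entry/1"))  # 204 again: the repeated request also succeeds
```

Some servers instead answer the second request with 404; the argument in the thread is that nothing obliges them to, because the client asked for an end state, not for a locate-then-destroy procedure.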
Mike Dierken wrote: > ... > >>Finally, there is also an intent to be able to let a user control what data is provided to different services (identified by application ID, again) -- something do-able with a token >> >> >but not with username/password combinations. > >What if the request contained the username/password of the application, not >the end-user. And the user could allow/deny access on a per-application >basis. The user would not need to re-login for each and every third-party >application (which might make them de-sensitized to guarding their >password). > > > This is equivalent to giving control over both authentication and authorization to third parties; we'd have to establish a trust relationship, and likely a business relationship, with every third party to make this work. It's not a very scalable solution, I think. Note that the third party would still need to at least pass in an additional user ID recognizable by our system in order to deal with per-user data (like my photos or bookmarks or ...). -- Abstractioneer <http://feeds.feedburner.com/aol/SzHO> John Panzer System Architect http://abstractioneer.org
On Tue, 2007-01-02 at 12:19 -0500, Mike Schinkel wrote:
> Walden Mathews wrote:
> > Maintaining multiple URIs for the same resource is not
> > best practice. For the same reason you want uniformity in
> > methods, you want one preferred URI for identifying a
> > resource. The server should redirect requests for other
> > equivalent URIs to the preferred one. Although that's not
> > quite what you are asking, I think it is the answer.
> Consider another set of URLs:
> http://www.mycarsite.com/{make}/{model}/{year}/
> But it's also logical to have:
> http://www.mycarsite.com/{year}/{make}/{model}/
> Or even:
> http://www.mycarsite.com/{make}/{year}/{model}/
They aren't the same resource. They are three different resources that
demarcate the same state :)
In practice you will want a "permalink" that can be bookmarked
consistently and referred to consistently elsewhere. The data from that
resource might be available elsewhere, but it still has a one true home
that can be used as its identifier. The other resources that refer to
the same state should normally include a link to this permanent home.
Benjamin
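Benjamin's "one true home" point can be sketched against the car-site URLs from this thread. This is an editor's hypothetical illustration (the resolver and its rules are invented): the alternate orderings remain addressable, but each resolves to a single canonical permalink that the server can redirect to.

```python
# Hypothetical sketch: resolve any of the three path orderings to one
# canonical permalink, so alternates can answer with a 301 redirect.

def canonical(make, model, year):
    # Arbitrarily pick /{make}/{model}/{year}/ as the permalink form.
    return f"http://www.mycarsite.com/{make}/{model}/{year}/"

def handle(segments):
    """Classify which ordering the request used, then redirect."""
    a, b, c = segments
    if a.isdigit():            # /{year}/{make}/{model}/
        year, make, model = a, b, c
    elif b.isdigit():          # /{make}/{year}/{model}/
        make, year, model = a, b, c
    else:                      # /{make}/{model}/{year}/ (canonical)
        make, model, year = a, b, c
    return 301, canonical(make, model, year)

print(handle(["2007", "honda", "civic"]))
# (301, 'http://www.mycarsite.com/honda/civic/2007/')
```

All three orderings demarcate the same state, but links, bookmarks, and caches accumulate on the one permalink the redirects converge on.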
Benjamin Carlyle wrote:
> > Consider another set of URLs:
> > http://www.mycarsite.com/{make}/{model}/{year}/
> > But it's also logical to have:
> > http://www.mycarsite.com/{year}/{make}/{model}/
> > Or even:
> > http://www.mycarsite.com/{make}/{year}/{model}/
>
> They aren't the same resource. They are three different
> resources that demarcate the same state :)
hehe
> In practice you will want a "permalink" that can be
> bookmarked consistently and referred to consistently
> elsewhere.
Permalink to what? All three?
> The data from that resource might be available
> elsewhere, but it still has a one true home that can be used
> as its identifier. The other resources that refer to the same
> state should normally include a link to this permanent home.
I agree with that point... except. No, really I do agree; that had been my
conclusion. But as I look at this example I created, how does one decide
which is canonical? And since each has different breadcrumbs, they are
arguably different, depending on the use-case for the person bookmarking.
Frankly, I think I have a lot of thinking to do on this issue...
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
On 1/3/07, Benjamin Carlyle <benjamincarlyle@...> wrote:
> On Tue, 2007-01-02 at 12:19 -0500, Mike Schinkel wrote:
> > Consider another set of URLs:
> > http://www.mycarsite.com/{make}/{model}/{year}/
> > But it's also logical to have:
> > http://www.mycarsite.com/{year}/{make}/{model}/
> > Or even:
> > http://www.mycarsite.com/{make}/{year}/{model}/
>
> They aren't the same resource. They are three different resources that
> demarcate the same state :)
>
Why aren't they the same resource?
Michael
On Tue, 2007-01-02 at 12:31 -0600, Hugh Winkler wrote: > On 12/31/06, Benjamin Carlyle <benjamincarlyle@...> wrote: > > On Sat, 2006-12-30 at 19:08 -0600, Hugh Winkler wrote: > > > On 12/30/06, Benjamin Carlyle <benjamincarlyle@...> wrote: > > > > I'm not sure who you think will interpret this form on the client > > > side. > > > > You see, the thing is that the atom document format is already a > > > form. A > > > > client has already received the form by being programmed in a > > > particular > > > > way, and is now submitting the form. It knows what information > > > should be > > > > placed in each of the named fields. It knows how to construct the > > > end > > > > document. > > > Ah, but it is a standard form for all servers -- not for my particular > > > server app. And to know what kind of form to submit, you read a spec > > > -- you did not GET the description from the server. It's all baked in > > > at design time. > > Exactly. It has been agreed ahead of time. > That's the problem --- "ahead of time" rather than dynamically at run time. What you have to understand here is that you are trying to replace human agreement with machine agreement. Humans are good at negotiating standards like atom. It's a hard problem, but standards get nailed down. Information producers agree to transform their internal models into the standard format. Information consumers agree that the standard format is a suitable source of data for their internal models. Internal models often have to bend as part of this process, and eventually realign around a competent standard to be more similar than they are different. Machines are not good at negotiation. You give a machine a form to fill out, and the machine already needs to know how to fill out the form before it starts. The form says "title", "summary", "content". The machine already needs an internal model that has those elements. 
The form says "don't give me the summary", the client could have code written to say "only send the subset of the standard which the server says it can accept". That's as good as you can do. You can't give a client an arbitrary form that isn't a simple subset and expect it to know what to do. If the client software wasn't written to know that only one author might be supported on the other side, it can't choose which author it should supply any better than the server side. You are lucky if it can interpret the server's instructions not to supply more than one at all. You can't arbitrarily place restrictions on the client as to how it should fill out its content. The only practical way to do it is to write a program that the client must run over a standard atom data model in order to fit the server's point of view. And guess what: That's the same program you would run on the server side if the client just submitted the atom document in the first place. Clients can't deal with unexpected server demands. Server demands are only expected if they are negotiated between humans, which is to say they are part of the atom specification. You can't do any better than what is in the standard by supplying a form. > >Now, I can just send my POST > > request and know that the server understands a useful subset of what I > > am sending them. > Not sure how you would know it understands a useful subset. You won't > have the slightest idea what parts of your document it understands, or > doesn't. Doesn't handle multiple authors? Can only accept text/plain > for <title>? I know it understands because we agreed on the content through the atom standardisation process. We agreed that I would send this much and the server would understand that much. Whether understand means "completely model" is up to the server. It is free to cut the xhtml out of its title. It is even free to use the xhtml content as text/plain. That's its prerogative. 
What is not in its prerogative is to reject a well-formed and valid atom document. If it intends to do that it should not claim to understand atom in the first place. > > What you are suggesting is that I first need to obtain a schema document > > (which you are calling a form) > A schema document is not a form. Maybe we should stay away from XML > for a moment. Think HTML form... which is a little "program" telling > the UA how to serialize a submission. You are describing the set of valid documents I can submit to you. You can call it a form if you like, but it is more correctly a schema. If you are no longer talking about a schema, and are now talking about a program to transform my atom content into your sub-atom content... then why aren't you running that program on the server side? > > to see if the server is actually > > understanding only a subset of the atom vocabulary. > Not a subset... could be a superset. Now you are talking about the client supplying more elements than it knows how to supply. You are presumably talking about extensions to the standard, but extensions are standards too. Extensions require human agreement between client and server in order to be understood. > > Then I need to > > customize my content to conform to this subset. As a machine, I don't > > have any good way of doing that. > See, you have this problem anyway. If you are sending a server stuff > it doesn't understand, or not enough stuff, your application will > fail. But I have already agreed through the standardisation process with the server that it will understand my content. My application will only fail if the server fails to implement the specification. > With forms, at least, your application knows "Hey, I don't know how to > fill out this required field". Same as a human would using an HTML > form. Your client can report that to a human for correction. It only knows if I write code. 
I only write code if I have communicated with the guy who wrote the server about what is permissible. I have already done this. We called that conversation the atom standardisation process. Why do you think a human in the loop can do anything about the failure to communicate? Are they going to hack on their client application every time a server says it only understands an unexpectedly-small subset of atom or demands an extension element be supplied? No... if there is a human in the loop she will write an email to the server's administrator to inform him of his bug in failing to implement the specification. It is not the client's problem. It is the server's problem. > Atom (and any application protocol based on exchanging known document > types) has to trade off between exhaustively specifying application > behavior and exhaustively specifying failure handling. And exhaustively supporting forwards-compatibility for extensions, and exhaustively trading server and client-side complexity for protocol features. > >If I come up to a server that has > > support for an unexpectedly small subset of atom, I then have to > > customize my content in an unexpected way. > It is better for you to do the customizing. Take the example of a > server that simply cannot honor text/html or application/xhtml+xml in > the title field. It can only handle text/plain. Atom protocol says > nothing at the moment about this situation, except that the server can > change your POSted data as it needs to. So presently my server either > a) rejects your submission or b) stores it as text/plain. Better > would be for your client to receive an Xform with a constraint > specifying "text/plain" only -- then, if the user had any important > rich content they wanted to put in the title, they can at least try to > compensate. No no no.... the client doesn't know how to customise the content. 
As the author of the client I relied on the atom specification that says I can supply a content element and I did. Now your server is telling me it doesn't understand it and wants a summary element instead? My client is not written to deal with that. The server can deal with its own shortcomings, thank you. If it doesn't understand the protocol it should stop speaking it. If the server doesn't understand xhtml in the title, then tough! It knows that it must be expected to deal with xhtml in the title because we agreed through the standardisation process that it should be capable. How it provides that support, as the atom spec rightly points out, is up to the server. Maybe it will strip out anything in angle-brackets before storing the value into its internal title variable. Maybe it will just use the xhtml content verbatim. That's the server's problem. It isn't permitted to reject my submission. How is my client supposed to know what it needs to do in the face of this dumb server? We already agreed that xhtml was fine, and now this server wants to go back on that? Move complexity to me, will you? No thanks. I'll find another server to talk to. Maybe one that understands xhtml in the title. > >That is to say, in a way that > > no one programmed me to customise my content in :) > Well, you would have programmed it from this pov, so you would have > handled these exceptional situations. I don't write client software to deal with broken servers that don't implement the spec. It is up to the server to deal with the problem if it can't translate my request precisely. > > The server, on the other hand, is in a good position to customise the > > content. It knows which subset of atom it understands, and it knows what > > atom is generally. It knows multiple authors might be required, so is in > > a position to either model those multiple authors or use an algorithm to > > select an author for its model from the available list. > See above. 
Yes, you are describing the undesirable behavior the > current APP forces you into. Quite the opposite. Your suggestion doesn't hold water. The spec reflects decades of experience in developing protocols that work and can evolve successfully for decades to come. Do you think you can do better without having written software on more than one side of the client/server fence? > > > What I proposed is that the form delivered by a server have just the > > > elements that make sense for the server. My server might not know what > > > to do with a <source> element. Using the standard "form", my server > > > has to have handling in place if you submit a <source> element, and it > > > has to describe to you the problem if you do submit <source> and my > > > server rejects it. > > All atom elements make sense to the server, even if they don't fit the > > server's internal model. > > The server implements atom, after all. > not so -- the client may submit extension elements the server isn't aware of. And the server is required to ignore them and the client is required to accept that old servers will ignore them. That's the way extensible protocols work. If the extension is good it will be supported. If it is not it will be ignored, sidelined, and eventually forgotten. > > What you > > are suggesting is adding an extra message exchange to move the > > complexity of fitting atom into the server's model back to the client. > > Neither side is going to be great at solving a model mismatch problem, > > but the client is likely to be downright incapable. > As above: The client is in the best position to take corrective > action, so as to best fulfill its intent. Show me the code. Your solution requires me to write client-side code every time a new kind of dumb server is placed on the internet. That doesn't work. That doesn't scale. Machines can't negotiate, only humans can... and we already have. If you have anything more to add to that conversation you had better do it. 
Don't try to hold a separate conversation with my client software. It
doesn't know how to hold that conversation.

> Forms made the web adaptive. Go to any airline reservation web site.
> They're all the same, but different too. They mostly do the same
> things, but Orbitz offers packages with hotels and cars, while Delta
> offers connections with partner airlines, and Priceline won't let you
> see the itinerary. There's some shared vocabulary among all those
> sites but varying behavior. If the airlines standardize their
> vocabulary for forms, you could program a client to interact,
> adaptively, with all those sites. But you could not constrain
> Priceline's app to squeeze into the same behavior model as Delta's.
> And you shouldn't -- you should encourage diversity among web apps, as
> has been successful on the web to date.

The evolvability of HTML and HTTP is what has been successful on the
web to date. Atom follows the evolvability and agreement model of its
predecessors. Do you really want your web server to have to retrieve a
form from every browser that requests an html page before it can be
returned? When that form says "I don't understand paragraph markers",
what will your web server do to make the content fit?

Benjamin.
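Benjamin's must-ignore point — that a server processes the Atom elements it knows and silently skips extension elements in foreign namespaces — can be sketched as follows. The KNOWN set and the helper function are hypothetical, not taken from any real server:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"
# Hypothetical subset of Atom that this particular server models.
KNOWN = {ATOM_NS + name for name in ("title", "author", "content", "summary")}

def known_children(entry_xml):
    """Return the child elements a must-ignore server would process,
    silently skipping extension elements it does not recognize."""
    entry = ET.fromstring(entry_xml)
    return [child.tag for child in entry if child.tag in KNOWN]

entry = """<entry xmlns="http://www.w3.org/2005/Atom"
                  xmlns:x="http://example.org/ext">
  <title>Hello</title>
  <x:rating>5</x:rating>
  <content>Body text</content>
</entry>"""

# title and content are processed; the x:rating extension is ignored.
```

The key property is that the filtering is entirely server-side: the client never needs to learn which extensions the server dropped.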
On Wed, 2007-01-03 at 05:52 -0500, Mike Schinkel wrote:
> Benjamin Carlyle wrote:
> > > Consider another set of URLS:
> > > http://www.mycarsite.com/{make}/{model}/{year}/
> > > But it's also logical to have:
> > > http://www.mycarsite.com/{year}/{make}/{model}/
> > > Or even:
> > > http://www.mycarsite.com/{make}/{year}/{model}/
> > In practice you will want a "permalink" that can be
> > bookmarked consistently and referred to consistently
> > elsewhere.
> Permalink to what? All three?
A permalink to the content, as viewed by the server-side.
> > The data from that resource might be available
> > elsewhere, but it still has a one true home that can be used
> > as its identifier. The other resources that refer to the same
> > state should normally include a link to this permanent home.
> I agree with that point... except. No, really I do; that had been my
> conclusion. But as I look at this example I created, how does one
> decide which is canonical? And since each has different breadcrumbs,
> they are arguably different, depending on the use-case for the person
> bookmarking.
The server side can decide what is canonical. We are blessed these days
with the example of the blogosphere, where the canonical url differs
from site to site. Each uses some balance between the needs of the
service provider and the needs of the service consumers.
The client can still bookmark whichever they like, but the server only
guarantees the ongoing existence of the url of its choosing. If the
client wants to refer to the content in a permanent way, it should refer
to it via the permanent link. Even if the server's guarantee is present
on all three urls, the server is still free to provide a "preferred" url
it suggests be used in these references.
> > They aren't the same resource. They are three different resources
> > that demarcate the same state :)
Michael Walter wrote:
> Why aren't they the same resource?
Yes, I was only half-joking. Perhaps cyclically, they are different
resources because they have different urls. They are identified
differently, so the server is permitted to change their content
independently. Only the server knows they demarcate the same state as
each other and will always have the same representations as each other.
Clients are not normally permitted to treat them as equivalent
resources, though the server is permitted to provide assurances that
they are the same through mechanisms such as owl:sameAs.
Benjamin.
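The server-chosen "preferred" url Benjamin describes can be advertised from every equivalent resource. A minimal sketch, reusing the mycarsite path scheme from earlier in the thread; the choice of ordering and the use of a rel="canonical" link relation are purely illustrative:

```python
def canonical_url(make, model, year):
    # The server picks one ordering as the permanent home; the other
    # orderings remain valid but link back to it. Scheme is hypothetical.
    return f"http://www.mycarsite.com/{make}/{model}/{year}/"

def link_header(make, model, year):
    # Advertised from each equivalent resource, e.g. as an HTTP Link
    # header, so clients can discover the server's preferred url.
    return f'<{canonical_url(make, model, year)}>; rel="canonical"'
```

Clients may still bookmark any of the three urls; only the advertised one carries the server's permanence guarantee.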
What method should I use to implement a preview function in a CMS?

I've got an API like this:

GET /someresource
    return 200 with the latest version of the resource

POST /someresource [content]
    create a new version of /someresource with the specified content.
    return 201 with the new version subsidiary resource, eg:
    /someresource/0.11

But what if I want to have a preview of the content before I commit it?
Right now I've got:

POST /someresource?preview [content]
    return 200 with what the resource would look like

But I'm unsure if this is right. POST should imply some
representational state transfer and this is not state transfer. It's
just a look ahead to state transfer.

GET /someresource?preview is no good because I can't send the content.

Any thoughts?

--
Nic Ferrier
http://www.tapsellferrier.co.uk
   for all your tapsell ferrier needs
On Wed, 2007-01-03 at 15:08 +0000, Nic James Ferrier wrote:
> What method should I use to implement a preview function in a CMS?
[...]
> Right now I've got:
>
> POST /someresource?preview [content]
> return 200 with what the resource would look like
>
> But I'm unsure if this is right. POST should imply some
> representational state transfer and this is not state transfer. It's
> just look ahead to state transfer.
Don't you submit the updated content so the server can render it as a
preview? Sounds like transferring a representation of the "Preview"
state to me. This seems fine.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org;echo ${a}@${b}
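Nic's two behaviors can be sketched as a single handler, with ?preview treated as "render the submitted representation without committing it", which is the reading jsled endorses. The in-memory store, the renderer, and the version-numbering scheme are all illustrative, not Nic's actual code:

```python
# Minimal in-memory sketch of the CMS API above: POST to the resource
# commits a new version; POST with ?preview renders the submitted
# content and stores nothing.
versions = {}  # resource path -> list of committed contents

def render(content):
    return f"<html><body>{content}</body></html>"  # stand-in renderer

def post(path, content, preview=False):
    if preview:
        # Transfer a representation of the "Preview" state: render the
        # submitted content, return 200, commit nothing.
        return 200, render(content)
    revs = versions.setdefault(path, [])
    revs.append(content)
    # 201 Created, pointing at the new version subsidiary resource.
    return 201, f"{path}/0.{len(revs)}"
```

The preview branch still transfers state (the client's draft representation travels to the server, and a rendered representation travels back); it just has no side effect on the stored resource.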
Benjamin Carlyle wrote: > * In REST it is likely that multiple resources will be served from a > single defined set of state, ie a service. For example an object or a > set of objects are likely to have several resources associated with > them > that act as the network interface to these objects. I would suggest > that > instead of "Each Dream service (being) a resource", each dream service > should make a set of resources available. You're absolutely right: a service can provide as many resources as its wants. I was thinking in terms of implementation since a service is instantiated, but that was wrong for this context. I've made note of it on the feedback page. > * Returning a dream blueprint to users of the system doesn't read > to me > as useful. If the blueprint exposure is an internal deployment > mechanism, this is ok. The blueprint appears to be an internal > document > that allows particular methods on particular resources to be mapped to > other method invocations on internally-defined objects. This is fine, > but should not normally be communicated to users of the interface. > Users > should see published lists of URIs that meet particular > requirements the > uses may have. Perhaps a cut-down form of the blueprint would be more > appropriate than one that exposes internal classes and the like. The blueprint is used by the framework to instantiate the service. So, the primary consumer of the blueprint is the framework. However, it's also used by /@inspect to return a HTML document about the service and its features. This human-readable document makes it easy to learn about new services and also provides additional links to online documentation. > * It looks like access to HTML headers and the like may be limited. > Where is the url returned in a POST's Location header in > <http://doc.opengarden.org/Dream_SDK/Tutorials/Address_Book>? That sample was intended for simplicity, not completeness. 
The basic communication unit is a HTTP message containing the headers and body. Common headers such as ContentType, ContentLength, etc. are exposed as properties, but arbitrary headers can also be set via the Headers[ ] property. -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Mike,

Thanks a lot for taking the time to read it and provide such detailed
feedback. I've captured all of it at:
http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us/Feedback

This is really good stuff, and catching my mistakes early before I
corrupt others is crucial. I strive to be part of the solution, not the
problem! ;)

Cheers,
- Steve

--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
I don't understand how this gives control to third parties.
What I read here http://developer.searchvideo.com/RESTAPIAuthSpec.php
indicated that both an application-id and a 'shared secret' are provided by
AOL. It appears that requests must provide a 'signature' which is based in
part on that shared secret - the AOL service must also know this shared
secret to authenticate that signature, so it seems that authentication is
performed by the AOL service.
The current approach requires the end user to provide authorization every 60
minutes through the process of displaying the appropriate web pages hosted
by AOL - by not providing their username/password the user essentially does
not authorize access. I was just suggesting that this authorization be more
explicit and the end-user could even see a list of applications and whether
they are authorized or not.
> Note that the third party would still need to at least pass in an
additional user ID recognizable by our system
> in order to deal with per-user data (like my photos or bookmarks or ...).
Yes - that is desired. Some of the documented APIs call this a token:
http://api.searchvideo.com/apiv3
?method=truveo.users.getFavoriteVideos
&appid=MY_APPID
&token=USER_TOKEN
&start=0
&results=10
&showRelatedItems=1
&sig=[signature]
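The sig parameter in requests like the one above is typically an HMAC of the request parameters keyed with the shared secret, which is why the service can verify it without the user's password. AOL's exact algorithm isn't given in this thread, so the canonicalization and hash choice below are generic assumptions, not the real scheme:

```python
import hashlib
import hmac
from urllib.parse import urlencode

SHARED_SECRET = b"s3cret"  # issued out of band by the provider (made-up value)

def sign(params, secret=SHARED_SECRET):
    """Generic sketch of a request signature: HMAC-SHA1 over the
    canonically ordered query string. Illustrative only -- the actual
    AOL/Truveo algorithm is not specified here."""
    canonical = urlencode(sorted(params.items()))
    return hmac.new(secret, canonical.encode(), hashlib.sha1).hexdigest()

params = {
    "method": "truveo.users.getFavoriteVideos",
    "appid": "MY_APPID",
    "token": "USER_TOKEN",
    "start": "0",
    "results": "10",
}
params["sig"] = sign(params)
```

Because only the application and the service hold the secret, a tampered request fails verification; the user token identifies whose data is being accessed without exposing a password.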
_____
From: John Panzer [mailto:jpanzer@...]
Sent: Tuesday, January 02, 2007 10:37 PM
To: S. Mike Dierken
Cc: 'Nic James Ferrier'; rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] AOL and REST?
S. Mike Dierken wrote:
...
Finally, there is also an intent to be able to let a user control what data
is provided to different services (identified by application ID, again) --
something do-able with a token
but not with username/password combinations.
What if the request contained the username/password of the application, not
the end-user. And the user could allow/deny access on a per-application
basis. The user would not need to re-login for each and every third-party
application (which might make them de-sensitized to guarding their
password).
This is equivalent to giving control over both authentication and
authorization to third parties; we'd have to establish a trust relationship,
and likely a business relationship, with every third party to make this
work. It's not a very scalable solution, I think. Note that the third
party would still need to at least pass in an additional user ID
recognizable by our system in order to deal with per-user data (like my
photos or bookmarks or ...).
--
Abstractioneer <http://feeds.feedburner.com/aol/SzHO>
John Panzer
System Architect
http://abstractioneer.org
"Mike Schinkel" <mikeschinkel@...> wrote: > > Benjamin Carlyle wrote: > > On the whole the framework doesn't look particularly RESTful > > or non-RESTful. It would be up to a particular blueprint and > > implementation to conform or not conform to REST principles. > > If it is designed for interaction with HTTP and allows > > appropriate access to HTTP headers I don't see any obvious > > problems on the surface. > > I'm curious; could RESTfulness be more strongly enforced by the framework? > The first goal for Dream is to make it easier to associate code with REST concepts. Currently, we have nice pattern matching on HTTP methods and URIs, but still lack content-type matching. Well, 2 out of 3 is not a bad start. The unit of communication is messages, which contain a status code, headers, and a body. Messages are modeled after HTTP messages, although they are not tied to any specific implementation, making the Dream runtime more portable. Finally, RESTful design is promoted by providing guidelines (such as REST for the Rest of Us) and examples. The framework itself permits developers to implement non-RESTful services, just like an object-oriented language facilitates non object-oriented code. The question of restricting design freedom is a tough one. It seems that even the best arguments are met with unusual, but necessary exceptions to the rules. My ongoing goal for Dream will be to facilitate tapping in to the existing richness of HTTP and make it easier for developers to build applications with it. Cheers, - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
I think a lot of the things you are objecting to actually are good
rules of thumb that joe-blow-developer can easily understand and
follow. While these rules aren't necessarily constraints imposed by
REST, they aren't in conflict with it either. Most of the article's
content is characterized as "patterns" as opposed to rules or concepts
defined by REST. So the author is being fairly clear about this IMO.

I think this is a really good idea and is much needed. The average
developer would probably appreciate and understand these simple rules
and patterns a lot more easily than an Architectural Style. As long as
the author is clear that these patterns are not defined by REST, and
that there are other ways to build RESTful interfaces, then I think
it's fine. Many people learn by example, and at this point the bad
examples outnumber the good. We need more good examples.

Andrew Wahbe

--- In rest-discuss@yahoogroups.com, "S. Mike Dierken" <dierken@...> wrote:
> > This introduction will cover some basics of HTTP [...]
> You probably should mention that REST does not mean "HTTP" and mention why
> you chose to talk about HTTP (like, it displays many of the characteristics
> described by the REST style).
>
> > Compare this to mounted network drives, whose names have no way of
> > being resolved by a recipient.
> Huh? I would need an explanation of how that's similar, so this could be
> clarified or replaced with something more understandable.
>
> > Similarly, we can also query a complex resource, such as a blog, to get
> > parts of the data:
> This isn't querying a resource, it simply /is/ a resource. You might say
> something like a resource can represent complex information like "all the
> blog comments after a particular timestamp written by 'anonymous'". The
> client doesn't need to know just how complex things could be for the
> server - it just does a GET.
>
> > All data resources support GET, but not all behavioral resources do.
> What's a "data resource" compared to a "behavioral resource"? You might
> rephrase that to say that some resources are read-only.
>
> > While a GET request cannot have side-effects, it can return only parts
> > of the resource. This means that GET is both an atomic read, as well as
> > a query operation.
> This is not correct - GET does not return parts of a resource. At least,
> not the way you suggest with query terms.
> You might want to describe that a GET could be a simple static file or a
> complex query operation - both are simply retrievals of data.
>
> > If two PUT operations occur simultaneously, one of them will win and
> > determine the final state of the resource.
> What happens to the other one? I think that both win. REST gives rise to
> all sorts of win-win situations.
>
> > In the case of two simultaneous DELETE operations, one of them may fail
> > since the resource will have already been deleted.
> Nope. It's important to realize that /both/ succeed. There should be no
> error code. The protocol does not assume "locate object or fail, activate
> object or fail, invoke method on object" - it just says "make sure the
> resource is gone when the request completes". An absent resource is just
> as gone as a recently deleted one.
>
> > The POST operation is very generic and no specific meaning can be
> > attached to it.
> There is meaning associated with it, and unique aspects regarding
> repeatability and caching.
>
> > In general, use POST when only a subset of a resource needs to be
> > modified and it cannot be accessed as its own resource; or when the
> > equivalent of a method call must be exposed.
> Rather than suggesting generic method calls are appropriate via POST,
> perhaps you could narrow it down to modification operations that don't
> fit into PUT or DELETE.
>
> > We can also update individual properties of the entry, because they are
> > exposed as nested resources.
> There isn't a concept of a 'nested resource'. And this example only works
> because the server implements it that way; it's not part of REST or HTTP
> (or even most Web server frameworks, unfortunately). And there is no
> requirement that the second resource that is the 'property of the entry'
> use a URI with paths - it could just as easily be:
> PUT http://myserver/myaddressbook/johndoe?prop=27
> PUT http://myserver/myaddressbook/phones?user=johndoe&type=work
>
> > -----Original Message-----
> > From: rest-discuss@yahoogroups.com
> > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Steve G. Bjorg
> > Sent: Monday, January 01, 2007 9:28 PM
> > To: rest-discuss@yahoogroups.com
> > Subject: [rest-discuss] Request for feedback: REST for the Rest of Us
> >
> > I've created a short tutorial on REST and some common
> > resource/service patterns. The tutorial is based on my
> > experience with designing the API for our DekiWiki application [1].
> >
> > I would welcome feedback on the accuracy of the content. The
> > tutorial will be used to introduce developers to REST and
> > establish a common framework for designing services.
> > http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us
> >
> > Thanks in advance for taking a look at it.
> >
> > Cheers,
> >
> > - Steve
> >
> > [1] http://doc.opengarden.org/DekiWiki_API/Reference/DekiWiki
> >
> > --------------
> > Steve G. Bjorg
> > http://www.mindtouch.com
> > http://www.opengarden.org
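Mike's closing point — that the "property of the entry" can live at a path-style URI or a query-style one, purely as an implementation choice of the server — can be sketched with two equivalent handlers. The address-book store and handler names are hypothetical:

```python
# Two equivalent ways a server might route a PUT that updates one
# property of an address-book entry. Which one exists is the server's
# choice; neither is mandated by REST or HTTP.
book = {"johndoe": {"work_phone": None}}

def put_path_style(user, prop, value):
    # e.g. PUT /myaddressbook/{user}/{prop}
    book[user][prop] = value

def put_query_style(user, type_, value):
    # e.g. PUT /myaddressbook/phones?user={user}&type={type}
    book[user][f"{type_}_phone"] = value
```

Either routing updates the same underlying state; the URI shape is a server-side design decision, not a protocol requirement.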
Nic James Ferrier wrote:
> ...
> There's a REST API!
>
> http://dev.aol.com/aol_video/index.html
>
> Except this isn't REST. It isn't even HTTP for God's sake. It's just
> stupid. There are methods embedded in the URIs. Everything is done
> with GET.

To more accurately reflect what the API is, the documentation is going
to be updated to use the more generic "Web APIs" instead of "REST APIs"
(we had a Meeting).

I think it would be interesting to see how the non-personalized
portions of the video search API would be best represented in a RESTful
way. (This portion doesn't require authentication and contains no
modifying methods:
http://developer.searchvideo.com/RESTAPIDocumentation.php.) What's
missing? What additional features/documentation would be useful?
(Caching?) Should OpenSearch be considered?

(The "User Methods", which require authentication and do modification,
are more problematic. But they're clearly not REST, so I'm not sure
it's useful to talk about them on this list in their current form. They
actually contain a fair amount of protection against accidental or
malicious modification, incorporating a hash signature which covers the
user token and the parameters, and which can't be faked without
stealing a shared secret.)

--
Abstractioneer <http://feeds.feedburner.com/aol/SzHO>
John Panzer
System Architect
http://abstractioneer.org
> I think this is a really good idea and is much needed. The
> average developer would probably appreciate and understand
> these simple rules and patterns a lot more easily than an
> Architectural Style. As long as the author is clear that
> these patterns are not defined by REST and that there are
> other ways to build RESTful interfaces then I think it's fine.

I agree - after I sent my comments I thought that maybe the page wasn't
a tutorial on REST so much as how that particular framework does
things. But there are portions that are not server-framework specific -
like the description of a second DELETE request failing. Describing
those correctly doesn't hurt.
Hi Steve,
I just read the article. It is quite nice, but isn't there one
problem with it? Statements like:
GET /resource Retrieve the entire resource. Query parameters may
be available to retrieve only parts of the resource.
And:
For example, the following GET operation retrieves the definition of
the EightBallService (Dream SDK/Tutorials/8-Ball):
GET http://myserver/host/blueprints/
MindTouch.Dream.Tutorial.EightBallService
This operation will return the following XML document:
<blueprint>
...
</blueprint>
To obtain the output in JSON, simply add the dream.out.format=jsonp
query parameter to the URI:
http://myserver/host/blueprints/
MindTouch.Dream.Tutorial.EightBallService?dream.out.format=jsonp
({
...
}
My understanding is that URIs in HTTP are opaque, and that each URI
references a *different* resource. Therefore:
http://myserver/host/blueprints/
MindTouch.Dream.Tutorial.EightBallService?dream.out.format=jsonp
Does not retrieve a different format of the resource at:
GET http://myserver/host/blueprints/
MindTouch.Dream.Tutorial.EightBallService
It retrieves a different resource. In other words, my understanding
of the HTTP spec's use of the term "resource," and also Roy's use of
it in his thesis, gives no special meaning to a question mark in a URI.
I do exactly what you are describing, because the user trying to
figure out our information architecture from the URIs may indeed
apply special meaning to that question mark. (In my model for URI
design, the part of the URI left of the question mark defines a
conceptual thing from the user's perspective, but not a resource from
the HTTP or REST perspective.) So I think it makes sense to do what
you're doing from a usability perspective, but my feeling is your
article may be inaccurate when it says you can apply query parameters
to a URI to get a subset or different format for a single resource.
Bill
----
Bill Venners
President
Artima, Inc.
http://www.artima.com
On Jan 1, 2007, at 9:27 PM, Steve G. Bjorg wrote:
> I've created a short tutorial on REST and some common resource/service
> patterns. The tutorial is based on my experience with designing the
> API for our DekiWiki application [1].
>
> I would welcome feedback on the accuracy of the content. The tutorial
> will be used to introduce developers to REST and establish a common
> framework for designing services.
> http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us
>
> Thanks in advance for taking a look at it.
>
>
> Cheers,
>
> - Steve
>
> [1] http://doc.opengarden.org/DekiWiki_API/Reference/DekiWiki
>
> --------------
> Steve G. Bjorg
> http://www.mindtouch.com
> http://www.opengarden.org
Mike,

Sorry, I think I misunderstood you. I think we're in agreement that
both the end user's and the intermediary's identities need to be
verified to some degree, and the existing APIs do that.

The issue of how the user controls the authorization of the
intermediary to perform services on their behalf is complicated. I'm
honestly not sure what the video search API does in all the cases, but
there are other APIs where the user can control the settings (and
choose to "remember this choice in the future") explicitly and in a
fine-grained way. My main point was that this is a contributing reason
why the authentication steps go through a web page presented at least
theoretically directly to the user, rather than being purely
protocol-based.

-John

S. Mike Dierken wrote:
> I don't understand how this gives control to third parties.
> What I read here
> http://developer.searchvideo.com/RESTAPIAuthSpec.php indicated that
> both an application-id and a 'shared secret' are provided by AOL. It
> appears that requests must provide a 'signature' which is based in
> part on that shared secret - the AOL service must also know this
> shared secret to authenticate that signature, so it seems that
> authentication is performed by the AOL service.
> The current approach requires the end user to provide authorization
> every 60 minutes through the process of displaying the appropriate web
> pages hosted by AOL - by not providing their username/password the
> user essentially does not authorize access. I was just suggesting that
> this authorization be more explicit and the end-user could even see a
> list of applications and whether they are authorized or not.
>
> > Note that the third party would still need to at least pass in an
> > additional user ID recognizable by our system in order to deal with
> > per-user data (like my photos or bookmarks or ...).
> Yes - that is desired. Some of the documented APIs call this a token:
> http://api.searchvideo.com/apiv3
> ?method=truveo.users.getFavoriteVideos
> &appid=MY_APPID
> &token=USER_TOKEN
> &start=0
> &results=10
> &showRelatedItems=1
> &sig=[signature]
Bill Venners <bv-svp@...> wrote:
> My understanding is that URIs in HTTP are opaque, and that each URI
> references a *different* resource. Therefore:
>
> http://myserver/host/blueprints/
> MindTouch.Dream.Tutorial.EightBallService?dream.out.format=jsonp
>
> does not retrieve a different format of the resource at:
>
> GET http://myserver/host/blueprints/
> MindTouch.Dream.Tutorial.EightBallService
>
> It retrieves a different resource. In other words, my understanding
> of the HTTP spec's use of the term "resource," and also Roy's use of
> it in his thesis, gives no special meaning to a question mark in a URI.
>
> I do exactly what you are describing, because the user trying to
> figure out our information architecture from the URIs may indeed
> apply special meaning to that question mark. (In my model for URI
> design, the part of the URI left of the question mark defines a
> conceptual thing from the user's perspective, but not a resource from
> the HTTP or REST perspective.) So I think it makes sense to do what
> you're doing from a usability perspective, but my feeling is your
> article may be inaccurate when it says you can apply query parameters
> to a URI to get a subset or different format for a single resource.

I'm not sure I'm following your point. Are you saying I could express
it better without using the term 'resource'? But what should it be
called then?

Let's look at the following use case (for Dream):
1) we have a URI that points to an XML document (e.g. http://server/recipes)
2) doing a GET on this URI gives us the document verbatim
3) doing a GET?dream.out.format=jsonp gives us the document in a
   different representation (sending 'Accept: application/json,
   text/javascript' should trigger the same conversion)
4) doing a GET?dream.out.select=/list/entry[author='Julia Child'] gives
   us the XML sub-document that matches the xpath expression

I would say that #2-#4 all refer to the same "resource", but choose to
represent it in different ways.

If I understood you correctly, you are stating that
"http://server/recipes" and
"http://server/recipes?dream.out.format=jsonp" identify two different
resources, because they are different URIs. Is that correct? And if
yes, is there a less strict interpretation that would weaken this
equality relationship to only apply to the scheme, authority, and path
components of the URI (i.e. excluding user name, password, path
parameters, query parameters, and fragment)? I think such an equality
relationship would be useful and feel intuitive. And it would certainly
be deserving of a name by which it could be referred to.

- Steve

--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
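Steve's proposed "less strict" equality — compare only the scheme, authority, and path, ignoring everything from the question mark on — is easy to state precisely. A sketch; the function name is made up for illustration:

```python
from urllib.parse import urlsplit

def same_resource_loosely(uri_a, uri_b):
    """Hypothetical 'loose' equality from the discussion above: two
    URIs are equal if their scheme, authority, and path match, ignoring
    query and fragment. (Strict HTTP/REST identity compares the whole
    URI, so this is a weaker, application-level relation.)"""
    a, b = urlsplit(uri_a), urlsplit(uri_b)
    return (a.scheme, a.netloc, a.path) == (b.scheme, b.netloc, b.path)
```

Under this relation, http://server/recipes and http://server/recipes?dream.out.format=jsonp are "the same", while http://server/other is not.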
Hi Steve,

On Jan 3, 2007, at 2:23 PM, Steve G. Bjorg wrote:
> I'm not sure I'm following your point. Are you saying I could express
> it better without using the term 'resource'? But what should it be
> called then?

Yes, since you're attempting to explain REST, and the term "resource" I
think means in REST what it means in HTTP, which is very specific. I
CC'd the list because I wasn't 100% sure of that, so see if anyone has
a different definition of "resource" in the HTTP and/or REST context.

> Let's look at the following use case (for Dream):
> 1) we have a URI that points to an XML document (e.g.
> http://server/recipes)
> 2) doing a GET on this URI gives us the document verbatim
> 3) doing a GET?dream.out.format=jsonp gives us the document in a
> different representation (sending 'Accept: application/json,
> text/javascript' should trigger the same conversion)
> 4) doing a GET?dream.out.select=/list/entry[author='Julia Child']
> gives us the XML sub-document that matches the xpath expression
>
> I would say that #2-#4 all refer to the same "resource", but chose to
> represent it in different ways.
>
> If I understood you correctly, you are stating the
> "http://server/recipes" and
> "http://server/recipes?dream.out.format=jsonp" identify two different
> resources, because they are different URIs. Is that correct? And if
> yes, is there a less strict interpretation that would weaken this
> equality relationship to only apply to the scheme, authority, and path
> components of the URI (i.e. excluding user name, password, path
> parameters, query parameters, and fragment)? I think such an equality
> relationship would be useful and feel intuitive. And would certainly
> be deserving of a name by which it could be referred to.

I agree. Our URIs in our new architecture do a similar thing. But I
don't call them "resources." I call them whatever they are: article,
forum post, etc. I also do the "container" pattern a lot, as you named
it. You could call these things (for lack of a better word) resources
if you want, but it seems like overloading the term in an article
trying to explain REST. You might try coming up with a better name than
"thing" that isn't "resource."

On the other hand, I'm not really sure it's an article about REST if
you're going in that direction. It may be more an article about
user-friendly web information architecture and URI design. At least
that's how I think of those patterns you described: that the URI to the
left of the question mark defines one "thing" in the *user's* mental
model of your website. The stuff to the right of the question mark the
user can think of as stuff that customizes their view of the "thing."

But those different URIs giving views of the same "thing" are not
representations of resources in the HTTP terminology, which is where I
believe REST lives. In that worldview each different URI is a different
"resource" that may have multiple "representations." For example:

http://www.artima.com/articles

is a view of the collection of articles that shows the most recently
published 15 articles.

http://www.artima.com/articles?p=7

shows page seven of the same list, the collection of articles shown in
most-recently-published order. Here the "thing" in the user's mind is
the collection of articles. But from the HTTP perspective, from my
reading of the spec, these are two distinct resources, each of which
could have multiple representations, not two representations of the
same resource.

Anyway, my feedback is simply to be very clear on the terms in your
article, and since it is an article attempting to explain REST, try to
use "resource" and "representation" as they are used in the HTTP spec
and Roy's paper. (And I welcome clarification from others if I'm not
understanding these terms correctly.)

Bill
S. Mike Dierken wrote: > > I think this is a really good idea and is much needed. The > > average developer would probably appreciate and understand > > these simple rules and patterns a lot more easily than an > > Architectural Style. As long as the author is clear that > > these patterns are not defined by REST and that there are > > other ways to build RESTful interfaces then I think it's fine. > I agree - after I sent my comments I thought that maybe the page wasn't a > tutorial on REST, so much as how that particular framework does things. But > there are portions that are not server framework specific - like the > description of a second DELETE request failing. Describing those correctly > doesn't hurt. So DELETE just indicates the intent to transition to a deleted state. Thus applying DELETE to a deleted resource is successful because the outcome is a deleted resource. Correct? Just to make sure, is that the generally accepted interpretation? -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
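[Editor's illustration] Steve's reading can be sketched as a tiny in-memory handler, with the store, URI, and status codes invented for the example: DELETE reports success whether or not the resource still exists, because the requested end state (a deleted resource) holds either way. Note that some servers instead answer 404 on the second DELETE; this sketch follows the interpretation asked about in the post.

```python
# Hypothetical in-memory handler; the URI, store, and status codes
# are made up for illustration, not taken from any framework here.
class ResourceStore:
    def __init__(self):
        self.resources = {"/recipes/42": "<entry>...</entry>"}

    def delete(self, uri):
        # DELETE indicates the intent to transition to a deleted state;
        # once that state holds, repeating the request still succeeds.
        self.resources.pop(uri, None)
        return 204  # No Content

store = ResourceStore()
first = store.delete("/recipes/42")   # resource existed: now deleted
second = store.delete("/recipes/42")  # already gone: same end state, same success
```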
Bill Venners wrote: > My understanding is that URIs in HTTP are opaque, and that each URI > references a *different* resource. Therefore: > > http://myserver/host/blueprints/ <http://myserver/host/blueprints/> > MindTouch.Dream.Tutorial.EightBallService?dream.out.format=jsonp > > Does not retrieve a different format of the resource at: > > GET http://myserver/host/blueprints/ <http://myserver/host/blueprints/> > MindTouch.Dream.Tutorial.EightBallService > > It retrieves a different resource. Bill, the same resource is accessed in both cases as the resource identifier is the same - namely the pieces up to the query string (see para two in 3.2.2 of the HTTP spec). The only criticism I'd have of the URL design here is that "dream.out.format" is long to type. cheers Bill
Bill de hOra wrote: > Bill, the same resource is accessed in both cases as the resource > identifier is the same - namely the pieces up to the query string (see > para two in 3.2.2 of the HTTP spec). Wonderful! All is good then. > The only criticism I'd have of the > URL design here is that "dream.out.format" is long to type. I got the idea to use namespaces on query parameters from OpenID. I think it's a great idea. Using something like 'output' is just calling for trouble. At some point someone will want to use a query parameter that is already used by the framework and then the mess begins. 'dream.format' couldn't be used either, because the framework also supports an inbound data transform (so you can post/put JSON from the browser). That said, the next version of Dream (Denim) will probably use HTTP headers instead. So that 'Content-Type: text/javascript' gets converted to 'Content-Type: application/xml' when appropriate and conversely with 'Accept: text/javascript' the content will be transformed to the target representation. This way, there is no need to play with URI query parameters and it also nicely leverages the capabilities that already exist in the protocol. However, 'dream.out.format' makes for a nicer demo in the browser! :) -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
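[Editor's illustration] The header-driven approach Steve describes for Denim could look roughly like this: pick an output transform from the Accept header instead of a query parameter. The transform names, table, and media types below are illustrative guesses, not Dream's actual API.

```python
# Illustrative transform table; the names and media types are
# assumptions for the sketch, not Dream's real configuration.
TRANSFORMS = {
    "application/xml": "identity",
    "text/javascript": "xml-to-jsonp",
    "application/json": "xml-to-json",
}

def choose_transform(accept_header):
    """Pick the first Accept media type we know how to produce."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()  # drop any q= parameters
        if media_type in TRANSFORMS:
            return TRANSFORMS[media_type]
    return "identity"  # fall back to serving the stored XML verbatim
```

With this, a browser asking for 'Accept: text/javascript' gets the JSONP transform without any query-string games.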
On Jan 3, 2007, at 4:22 PM, Steve G. Bjorg wrote: > Bill de hOra wrote: > > Bill, the same resource is accessed in both cases as the > resource > > identifier is the same - namely the pieces up to the query string > (see > > para two in 3.2.2 of the HTTP spec). > > Wonderful! All is good then. That was the case back in 1994, when I wrote it. It has not been true for a long time since. It is one of those things that became untrue once people realized that the interface had no reason to respect such a distinction (and rightly so -- it introduces coupling where none is needed) and so the distinction was removed server-side to reflect the migration of resources to new implementations. The resource in HTTP is the mapping from the entire identifier (including scheme, authority, path, and query) to a set of values. For two resources to be the same, they must map to the same set of values for all time. There is no way to determine that by inspecting the identifiers, aside from scheme-defined equivalence. ....Roy
Hi Bill, On Jan 3, 2007, at 3:26 PM, Bill de hOra wrote: > Bill Venners wrote: > >> My understanding is that URIs in HTTP are opaque, and that each URI >> references a *different* resource. Therefore: >> >> http://myserver/host/blueprints/ <http://myserver/host/blueprints/> >> MindTouch.Dream.Tutorial.EightBallService?dream.out.format=jsonp >> >> Does not retrieve a different format of the resource at: >> >> GET http://myserver/host/blueprints/ <http://myserver/host/ >> blueprints/> >> MindTouch.Dream.Tutorial.EightBallService >> >> It retrieves a different resource. > > Bill, the same resource is accessed in both cases as the resource > identifier is the same - namely the pieces up to the query string (see > para two in 3.2.2 of the HTTP spec). The only criticism I'd have of > the > URL design here is that "dream.out.format" is long to type. > Thanks very much for that pointer. You're right about 3.2.2, and that challenges my understanding of HTTP terminology. So whether I GET: http://www.artima.com/articles or http://www.artima.com/articles?p=7 I get back a representation of the *same* resource (which from the user's perspective is a collection of articles). Is it correct to call these different "representations" of the same resource? It says in the spec that a representation is "An entity included with a response that is subject to content negotiation, as described in section 12. There may exist multiple representations associated with a particular response status." Tacking on a "?p=7" is not content negotiation as defined by the spec. So I think all I can say is I get back *a* representation of the same "collection of articles" resource from any URI that is "http://www.artima.com/articles" to the left of the question mark. I looked again at 3.11 in the spec, and I see they left sufficient wiggle room for entity tags and caching. 
Given that the spec says entity tags are per-resource, my understanding to date had been that I could have a unique set of entity tags per URI. But since the above two distinct URIs really refer to the same resource, their entity tags should really be taken from the same set. But the spec says that "A given entity tag value MAY be used for entities obtained by requests on different URIs. The use of the same entity tag value in conjunction with entities obtained by requests on different URIs does not imply the equivalence of those entities." This says a cache can only rely on identical entity tags meaning the entity is identical if it is from the same URI. So in practice it worked like I had imagined: I can have a unique set of entity tags per URI, and If-None-Match requests from caches will really work per-URI, not per-resource. Does that sound correct? Bill > cheers > Bill
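[Editor's illustration] Bill's per-URI reading can be pictured with a toy cache model; the URIs, tag values, and bodies below are invented for the example. The point is that an If-None-Match check only ever compares a validator against the tag cached for that same URI, never across URIs.

```python
# Toy model of per-URI entity-tag validation. All URIs and tag
# values here are invented; this is not any real cache's API.
cached_tags = {
    "http://www.artima.com/articles":     '"v7"',
    "http://www.artima.com/articles?p=7": '"v3"',
}

# What the origin server currently holds for each URI (also invented):
server_tags = {
    "http://www.artima.com/articles":     '"v7"',  # unchanged
    "http://www.artima.com/articles?p=7": '"v9"',  # page 7 changed
}

def conditional_get(uri):
    """Simulate a conditional GET: the cache sends If-None-Match with
    the tag it stored for this URI; the server answers 304 on a match."""
    if_none_match = cached_tags.get(uri)
    if if_none_match is not None and server_tags.get(uri) == if_none_match:
        return 304  # Not Modified: reuse the cached entity
    return 200      # full response with a fresh entity
```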
Roy T. Fielding wrote: > > The resource in HTTP is the mapping from the entire identifier > (including scheme, authority, path, and query) to a set of values. I'm reluctant to quibble with an editor, but I was referring to the means of identification as laid out, and not the substance of the resource. cheers Bill
On Jan 3, 2007, at 5:13 PM, Bill de hOra wrote: > Roy T. Fielding wrote: > > > > The resource in HTTP is the mapping from the entire identifier > > (including scheme, authority, path, and query) to a set of values. > > I'm reluctant to quibble with an editor, but I was referring to the > means of identification as laid out, and not the substance of the > resource. What is the difference? ;-) ....Roy
Benjamin, ---------------------- Objection: "Clients can't be programmed to complete forms" Counter-example: This machine automatically fills out a form based on a vocabulary every day: Your web browser, when it pre-fills form fields for you. Browsers know that user name and password fields on HTML forms usually have consistent names. And Google Toolbar saves some common personal info about you and auto-populates your name, address, etc., based on a common vocabulary. You just program a client to map information it has, to fields identified by name -- a vocabulary. ---------------------- Misconception: "Forms are schemas" Correction: Forms are queries made by the server to the client. "Give me the data you have named 'author-email'." Schemas describe relations among objects. They describe models, not messages. ---------------------- Starry-eyed delusion: A Standards process can capture the union of all the semantics any server could manifest. Corollary: Exchanging these documents is "scalable" Real world experience: RosettaNet PIP 3A4. Thoroughly specified schemas describing almost every possible Purchase Order Request and Response. Guess how long it takes one large computer manufacturer to shake out enough impedance mismatches to get going with each new trading partner? 30-60 days, and that is a dedicated team of experts exchanging test messages, "validating" them, discovering and resolving the semantic mismatches in even the "valid" documents. So that's one or two months per partner. How long does it take to ramp on just 100 partners? Years. Not web scale, is it? Exchanging pre-defined special purpose documents, as APP does, satisfies REST's definition, but it's not webby. Forms have proven their scalability on the web, because they promote loose coupling. They offer hope that web services could be deployed at web scale. 
Hugh On 1/3/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > On Tue, 2007-01-02 at 12:31 -0600, Hugh Winkler wrote: > > On 12/31/06, Benjamin Carlyle <benjamincarlyle@...> wrote: > > > On Sat, 2006-12-30 at 19:08 -0600, Hugh Winkler wrote: > > > > On 12/30/06, Benjamin Carlyle <benjamincarlyle@...> wrote: > > > > > I'm not sure who you think will interpret this form on the client > > > > side. > > > > > You see, the thing is that the atom document format is already a > > > > form. A > > > > > client has already received the form by being programmed in a > > > > particular > > > > > way, and is now submitting the form. It knows what information > > > > should be > > > > > placed in each of the named fields. It knows how to construct the > > > > end > > > > > document. > > > > Ah, but it is a standard form for all servers -- not for my particular > > > > server app. And to know what kind of form to submit, you read a spec > > > > -- you did not GET the description from the server. It's all baked in > > > > at design time. > > > Exactly. It has been agreed ahead of time. > > That's the problem --- "ahead of time" rather than dynamically at run time. > > What you have to understand here is that you are trying to replace > human agreement with machine agreement. Humans are good at negotiating > standards like atom. It's a hard problem, but standards get nailed down. > Information producers agree to transform their internal models into the > standard format. Information consumers agree that the standard format is > a suitable source of data for their internal models. Internal models > often have to bend as part of this process, and eventually realign > around a competent standard to be more similar than they are different. > > Machines are not good at negotiation. You give a machine a form to fill > out, and the machine already needs to know how to fill out the form > before it starts. The form says "title", "summary", "content". 
The > machine already needs an internal model that has those elements. The > form says "don't give me the summary", the client could have code > written to say "only send the subset of the standard which the server > says it can accept". That's as good as you can do. > > You can't give a client an arbitrary form that isn't a simple subset and > expect it to know what to do. If the client software wasn't written to > know that only one author might be supported on the other side, it can't > choose which author it should supply any better than the server side. > You are lucky if it can interpret the server's instructions not to > supply more than one at all. > > You can't arbitrarily place restrictions on the client as to how it > should fill out its content. The only practical way to do it is to write > a program that the client must run over a standard atom data model in > order to fit the server's point of view. And guess what: That's the same > program you would run on the server side if the client just submitted > the atom document in the first place. > > Clients can't deal with unexpected server demands. Server demands are > only expected if they are negotiated between humans, which is to say > they are part of the atom specification. You can't do any better than > what is in the standard by supplying a form. > > > >Now, I can just send my POST > > > request and know that the server understands a useful subset of what I > > > am sending them. > > Not sure how you would know it understands a useful subset. You won't > > have the slightest idea what parts of your document it understands, or > > doesn't. doesn't handle multiple authors? Can only accept text/plain > > for <title>? > > I know it understands because we agreed on the content through the atom > standardisation process. We agreed that I would send this much and the > server would understand that much. Whether understand means "completely > model" is up to the server. 
It is free to cut the xhtml out of its > title. It is even free to use the xhtml content as text/plain. That's > its prerogative. What is not in its prerogative is to reject a > well-formed and valid atom document. If it intends to do that it should > not claim to understand atom in the first place. > > > > What you are suggesting is that I first need to obtain a schema document > > > (which you are calling a form) > > A schema document is not a form. Maybe we should stay away from XML > > for a moment. Think HTML form... which is a little "program" telling > > the UA how to serialize a submission. > > You are describing the set of valid documents I can submit to you. You > can call it a form if you like, but it is more correctly a schema. > > If you are no longer talking about a schema, and are now talking about a > program to transform my atom content into your sub-atom content... then > why aren't you running that program on the server side? > > > > to see if the server is actually > > > understanding only a subset of the atom vocabulary. > > Not a subset... could be a superset. > > Now you are talking about the client supplying more elements than it > knows how to supply. You are presumably talking about extensions to the > standard, but extensions are standards too. Extensions require human > agreement between client and server in order to be understood. > > > > Then I need to > > > customize my content to conform to this subset. As a machine, I don't > > > have any good way of doing that. > > See, you have this problem anyway. If you are sending a server stuff > > it doesn't understand, or not enough stuff, your application will > > fail. > > But I have already agreed through the standardisation process with the > server that it will understand my content. My application will only fail > if the server fails to implement the specification. > > > With forms, at least, your application knows "Hey, I don't know how to > > fill out this required field". 
Same as a human would using an HTML > > form. Your client can report that to a human for correction. > > It only knows if I write code. I only write code if I have communicated > with the guy who wrote the server about what is permissible. I have > already done this. We called that conversation the atom standardisation > process. > > Why do you think a human in the loop can do anything about the failure > to communicate? Are they going to hack on their client application every > time a server says it only understands an unexpectedly-small subset of > atom or demands an extension element be supplied? No... if there is a > human in the loop she will write an email to the server's administrator > to inform him of his bug in failing to implement the specification. It > is not the client's problem. It is the server's problem. > > > Atom (and any application protocol based on exchanging known document > > types) has to trade off between exhaustively specifying application > > behavior and exhaustively specifying failure handling. > > And exhaustively supporting forwards-compatibility for extensions, and > exhaustively trading server and client-side complexity for protocol > features. > > > >If I come up to a server that has > > > support for an unexpectedly small subset of atom, I then have to > > > customize my content in an unexpected way. > > It is better for you to do the customizing. Take the example of a > > server that simply cannot honor text/html or application/xhtml+xml in > > the title field. It can only handle text/plain. Atom protocol says > > nothing at the moment about this situation, except that the server can > > change your POSted data as it needs to. So presently my server either > > a) rejects your submission or b) stores it as text/plain. 
Better > > would be for your client to receive an Xform with a constraint > specifying "text/plain" only -- then, if the user had any important > rich content they wanted to put in the title, they can at least try to > compensate. > > No no no.... the client doesn't know how to customise the content. As > the author of the client I relied on the atom specification that says I > can supply a content element and I did. Now your server is telling me it > doesn't understand it and wants a summary element instead? My client is > not written to deal with that. The server can deal with its own > shortcomings, thank you. If it doesn't understand the protocol it should > stop speaking it. > > If the server doesn't understand xhtml in the title, then tough! It > knows that it must be expected to deal with xhtml in the title because > we agreed through the standardisation process that it should be capable. > How it provides that support as the atom spec rightly points out is up > to the server. Maybe it will strip out anything in angle-brackets before > storing the value into its internal title variable. Maybe it will just > use the xhtml content verbatim. That's the server's problem. It isn't > permitted to reject my submission. How is my client supposed to know > what it needs to do in the face of this dumb server? We already > agreed that xhtml was fine, and now this server wants to go back on > that? Move complexity to me, will you? No thanks. I'll find another > server to talk to. Maybe one that understands xhtml in the title. > > > >That is to say, I way that > > > no one programmed me to customise my content in :) > > Well, you would have programmed it from this pov, so you would have > > handled these exceptional situations. > > I don't write client software to deal with broken servers that don't > implement the spec. It is up to the server to deal with the problem if it > can't translate my request precisely. 
> > > > The server, on the other hand, is in a good position to customise the > > > content. It knows which subset of atom it understands, and it knows what > > > atom is generally. It knows multiple authors might be required, so is in > > > a position to either model those multiple authors or use an algorithm to > > > select an author for its model from the available list. > > See above. Yes, you are describing the undesirable behavior the > > current APP forces you into. > > Quite the opposite. Your suggestion doesn't hold water. The spec > reflects decades of experience in developing protocols that work and can > evolve successfully for decades to come. Do you think you can do better > without having written software on more than one side of the > client/server fence? > > > > > What I proposed is that the form delivered by a server have just the > > > > elements that make sense for the server. My server might not know what > > > > to do with a <source> element. Using the standard "form", my server > > > > has to have handling in place if you submit a <source> element, and it > > > > has to describe to you the problem if you do submit <source> and my > > > > server rejects it. > > > All atom elements make sense to the server, even if they don't fit the > > > server's internal model. > > > The server implements atom, after all. > > not so -- the client may submit extension elements the server isn't aware of. > > And the server is required to ignore them and the client is required to > accept that old servers will ignore them. That's the way extensible > protocols work. If the extension is good it will be supported. If it is > not it will be ignored, sidelined, and eventually forgotten. > > > > What you > > > are suggesting is adding an extra message exchange to move the > > > complexity of fitting atom into the server's model back to the client. 
> > > Neither side is going to be great at solving a model mismatch problem, > > > but the client is likely to be downright incapable. > > As above: The client is in the best position to take corrective > > action, so as to best fulfill its intent. > > Show me the code. Your solution requires me to write client-side code > every time a new kind of dumb server is placed on the internet. That > doesn't work. That doesn't scale. Machines can't negotiate, only humans > can... and we already have. If you have anything more to add to that > conversation you had better do it. Don't try to hold a separate > conversation with my client software. It doesn't know how to hold that > conversation. > > > Forms made the web adaptive. Go to any airline reservation web site. > > They're all the same, but different too. They mostly do the same > > things, but Orbitz offers packages with hotels and cars, while Delta > > offers connections with partner airlines, and Priceline won't let you > > see the itinerary. There's some shared vocabulary among all those > > sites but varying behavior. If the airlines standardize their > > vocabulary for forms, you could program a client to interact, > > adaptively, with all those sites. But you could not constrain > > Priceline's app to squeeze into the same behavior model as Delta's. > > And you shouldn't -- you should encourage diversity among web apps, as > > has been successful on the web to date. > > The evolvability of HTML and HTTP are what have been successful on the > web to date. Atom follows the evolvability and agreement model of its > predecessors. Do you really want your web server to have to retrieve a > form from every browser that requests an html page before it can be > returned? When that form says "I don't understand paragraph markers", > what will your web server do to make the content fit? > > Benjamin. > >
> > That said, the next version of Dream (Denim) will probably > use HTTP headers instead. So that 'Content-Type: > text/javascript' gets converted to 'Content-Type: > application/xml' when appropriate and conversely with > 'Accept: text/javascript' the content will be transformed to > the target representation. This way, there is no need to > play with URI query parameters and it also nicely leverages > the capabilities that already exist in the protocol. > However, 'dream.out.format' makes for a nicer demo in > the browser! :) > I once built a framework like this and used the names from HTTP with a prefix, like "?do:accept=text/xml". That way I could point to an existing definition of what 'accept' means as well as the allowed values. With "dream.out.format" it's not clear that a MIME type is to be supplied.
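[Editor's illustration] The prefixed-parameter idea above can be sketched as a tiny request filter; the "do:" prefix is from the post, but the function name and mapping logic are made up for the example. A prefixed query parameter gets promoted into the HTTP header it mirrors, so its meaning and legal values come from the HTTP spec rather than from the framework.

```python
from urllib.parse import parse_qsl, urlsplit

def promote_query_overrides(uri, headers):
    """Lift 'do:'-prefixed query parameters into the HTTP headers they
    mirror, e.g. ?do:accept=text/xml becomes Accept: text/xml.
    (Hypothetical helper; only the 'do:' prefix comes from the post.)"""
    headers = dict(headers)  # don't mutate the caller's dict
    for name, value in parse_qsl(urlsplit(uri).query):
        if name.startswith("do:"):
            headers[name[3:].title()] = value  # do:accept -> Accept
    return headers
```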
On 1/2/07, Mike Schinkel <mikeschinkel@...> wrote: > Walden Mathews wrote: > > Maintaining multiple URIs for the same resource is not > > best practice. For the same reason you want uniformity in > > methods, you want one preferred URI for identifying a > > resource. The server should redirect requests for other > > equivalent URIs to the preferred one. Although that's not > > quite what you are asking, I think it is the answer. > > I'm curious why you make the above statement. Is there a W3C finding that documents this, or some other paper? I've been doing a significant amount of research on URLs and URI best practices in the past 3+ months and have found nothing like this (though I could have missed it.) http://www.w3.org/TR/webarch/#avoid-uri-aliases "URI owner SHOULD NOT associate arbitrarily different URIs with the same resource." -joe -- Joe Gregorio http://bitworking.org
Joe Gregorio wrote: > > I'm curious why you make the above statement. Is there a > W3C finding > > that documents this, or some other paper? I've been doing a > > significant amount of research on URLs and URI best > practices in the > > past 3+ months and have found nothing like this (though I > could have > > missed it.) > > http://www.w3.org/TR/webarch/#avoid-uri-aliases > > "URI owner SHOULD NOT associate arbitrarily different URIs > with the same resource." Thanks for the link. *Sigh* I think that finding optimizes one aspect of the web at the expense of others. It certainly places limits on what can be done from a URL usability perspective. While I do understand the rationale behind it, I plan to advocate that the finding needs to be reconsidered in the future. However, since I know that the current state of web specifications makes that impractical, I want to look at the root cause of why it's not practical and see what we can do to address it. The research I'm doing has been heading me down this path and I will be writing about it at length in the near future. BTW, I'd argue that a worse problem exists with respect to URL aliases from the perspective of web apps being sloppy. Web apps that use query parameters in different orders, that use sessionIDs and userIDs as query parameters where they don't change the content, and so on create lots of URI aliases unintentionally. Ironically, nobody says much about that. That's another problem that I also think really should be addressed at the same time. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
Roy T. Fielding wrote: > The resource in HTTP is the mapping from the entire identifier > (including scheme, authority, path, and query) to a set of values. > For two resources to be the same, they must map to the same set > of values for all time. There is no way to determine that by > inspecting the identifiers, aside from scheme-defined equivalence. I think I get what you are saying, but I want to be sure. Basically you are saying that each of the following URLs points to a DIFFERENT resource as far as REST is concerned, correct? http://www.foo.com/bar http://www.foo.com/bar/ http://www.foo.com/bar/index.php http://www.foo.com/bar?a=1&b=2 http://www.foo.com/bar?b=2&a=1 http://www.foo.com/bar/?a=1&b=2 http://www.foo.com/bar/?b=2&a=1 http://www.foo.com/bar/index.php?a=1&b=2 http://www.foo.com/bar/index.php?b=2&a=1 -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
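[Editor's illustration] The distinctness Mike lists can be checked mechanically: generic URI syntax only licenses trivial equivalences such as case-insensitive schemes and hosts, so every path or query difference, even a reordered query string, yields a distinct identifier. A small sketch over four of the URIs:

```python
from urllib.parse import urlsplit

# Each of these identifiers is distinct as a string; even reordering
# the query string yields a different URI.
uris = [
    "http://www.foo.com/bar",
    "http://www.foo.com/bar/",
    "http://www.foo.com/bar?a=1&b=2",
    "http://www.foo.com/bar?b=2&a=1",
]

distinct = len(set(uris))  # 4: no two compare equal

# One equivalence generic URI syntax does grant: scheme and host
# comparison is case-insensitive.
same_host = (urlsplit("HTTP://WWW.FOO.COM/bar").netloc.lower()
             == urlsplit(uris[0]).netloc)
```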
Bill, Scott, yepp, APP solves all of these in a standardized way. IIRC there is actually already a 'standardized' means to find the editing-resource of another resource by pure HTTP mechanisms: The Link header together with the 'source' relation. That is, if the server tells you about a related resource via a Link of type 'source' all editing should be applied to that one. I recall Roy having mentioned that on one of the Atom lists. Though I have no idea about the actual standardization status of either the Link header or the source relation... Jan On 08.08.2006, at 18:11, Bill de hOra wrote: > Scott Chapman wrote: > > > > > > If hitting a URL with GET is supposed to give you a view of the > resource > > and > > it's not a form by default, then how to do you get a <form...> > view of > > it for > > editing? > > > > The same applies to getting a blank form for creating a new > resource. > > > > How are people handling these situations in their RESTful designs? > > The most common pattern today is to provide the form at another URL, > which you typically embed as a link in the representation. Blog > APIs do > something similar by providing a URL you post into. Looking further > out, > atom protocol will standardize a means for getting at what it calls > the > "edit-uri" for a resource. > > It's a good question, illuminating even. > > cheers > Bill > > >
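[Editor's illustration] Jan's suggestion would look roughly like this on the wire: a response carries a Link header naming its 'source' relation, and a client directs its edits at that target. Since the Link header's standardization status was unclear at the time, the header value and the tiny parser below are purely illustrative, with an invented URI.

```python
import re

# Illustrative only: a Link header advertising the editable 'source'
# of a resource. The URI is invented for the example.
link_header = '<http://example.org/entries/42/edit>; rel="source"'

def find_source_link(header_value):
    """Return the target of a rel="source" link, or None.
    (Naive regex parse; real Link-header syntax is richer.)"""
    match = re.match(r'<([^>]+)>\s*;\s*rel="source"', header_value)
    return match.group(1) if match else None
```

A client would then send its PUT or DELETE to the returned URI rather than to the resource it originally fetched.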
On 1/3/07, Hugh Winkler <hughw@...> wrote: > Starry-eyed delusion: A Standards process can capture the union of all > the semantics any server could manifest. > Corollary: Exchanging these documents is "scalable" > > Real world experience: RosettaNet PIP 3A4. Thoroughly specified > schemas describing almost every possible Purchase Order Request and > Response. Guess how long it takes one large computer manufacturer to > shake out enough impedance mismatches to get going with each new > trading partner? 30-60 days, and that is a dedicated team of experts > exchanging test messages, "validating" them, discovering and resolving > the semantic mismatches in even the "valid" documents. So that's one > or two months per partner. How long does it take to ramp on just 100 > partners? Years. Not web scale, is it? That fits with my experience. Previous ecommerce standards, ANSI X12 EDI and EDIFACT, were even worse: took months. Some of the same people have now created UBL: a 4th or 5th generation (X12, EDIFACT, RosettaNet, UBL, depending on whether you put ebXML into that genealogy). We'll see how quickly new trading partners can get going using that. I think it's less prescriptive than RosettaNet, which tried to achieve minimal configuration.
On Wed, 2007-01-03 at 22:23 +0000, Steve G. Bjorg wrote: > Let's look at the following use case (for Dream): > 1) we have a URI that points to an XML document (e.g. > http://server/recipes) > 2) doing a GET on this URI gives us the document verbatim > 3) doing a GET?dream.out.format=jsonp gives us the document in a > different representation (sending 'Accept: application/json, > text/javascript' should trigger the same conversion) > 4) doing a GET?dream.out.select=/list/entry[author='Julia Child'] > gives us the XML sub-document that matches the xpath expression > > I would say that #2-#4 all refer to the same "resource", but choose to > represent it in different ways. Here is the vocabulary I use: #2-#4 all refer to different _resources_ because they have different URIs. From the client perspective no relationship can be drawn between these resources without additional information, i.e. some sort of hyperlink. It is perhaps peculiar that resources don't always exist within the server. Sometimes they are implemented as objects in their own right, but other times a single object might offer many resources or vice versa. Resources themselves exist between clients and servers as a shared interface concept. These different resources all _demarcate_ related _application state_ on the server side. Each resource selects a piece of state (i.e. information) in the server that might be part of a single object, might be a whole object, or might be parts of several different objects. If we were talking about databases a resource might demarcate a subset of rows across multiple tables. When we GET a resource, we retrieve a _representation_ of that resource. This representation is a document that contains the information demarcated by the resource. The same resource might choose to return its state in several formats, providing several different representations of its state. Different representations may offer different levels of semantic fidelity. 
For example, atom data could be returned in a simple csv or text file... but then the information couldn't be used in a news reader client application. Benjamin.
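[Editor's illustration] Benjamin's fidelity point can be sketched with one piece of demarcated state rendered two ways; the entries and markup below are invented, and the XML-ish output is deliberately not a valid Atom document. The CSV form flattens away the structure a feed reader would need, while the nested form keeps it.

```python
import csv
import io

# One piece of demarcated state, two representations of differing
# semantic fidelity. Entries here are invented for the example.
state = [{"title": "Entry one", "author": "Julia Child"},
         {"title": "Entry two", "author": "Benjamin"}]

def as_csv(entries):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["title", "author"])
    writer.writeheader()
    writer.writerows(entries)
    return buf.getvalue()  # flat text: fine for a spreadsheet

def as_xmlish(entries):
    # Keeps nesting a client can navigate; NOT valid Atom, just a sketch.
    items = "".join("<entry><title>%s</title><author>%s</author></entry>"
                    % (e["title"], e["author"]) for e in entries)
    return "<feed>%s</feed>" % items
```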
On Wed, 2007-01-03 at 20:06 -0600, Hugh Winkler wrote: > Objection: "Clients can't be programmed to complete forms" > Counter-example: This machine automatically fills out a form based on > a vocabulary every day: Your web browser, when it pre-fills form > fields for you. Browsers know that user name and password fields on > HTML forms usually have consistent names. And Google Toolbar saves > some common personal info about you and auto-populates your name, > address, etc, based on a common vocabulary. You just program a client > to map information it has, to fields identified by name -- a > vocabulary. > Misconception: "Forms are schemas" > Correction: Forms are queries made by the server to the client. "Give > me the data you have named 'author-email'". Schemas describe relations > among objects. They describe models, not messages. The actual query made by the server to the client is: Put a text box next to the human-readable text "Please enter your email address". Return what the user enters into that box as the "author-email" element of an XML document. Oh, and by the way: don't submit it back until that XML document matches this XML schema I supply. The kind of client atompub is designed to support already has a text box. It's called "author". Now your form says "give me author-email"... but there is no author-email box in the application. The application has already been written and its data model doesn't match yours. Now what would you like the client to do about it? If you just want to supply a form to a user, you can do that. You don't need atompub for that. You don't need any standardisation past HTML forms or XForms. Atompub is what you are looking for when you want to write a clever and easy-to-use thick-client Atom authoring client. Supplying a form to this thick client isn't going to help. You say you want an author-email box next to the "Please enter your email address" text? Hey! My interface has already been designed. 
Where do you expect me to put that text box? If you want to design your own interface, just supply the form to a web browser and be done with it. For me, I have my own user interface. The purpose of atompub is to allow that client with its complete user interface to submit articles to a server that has also been written. > Starry-eyed delusion: A Standards process can capture the union of all > the semantics any server could manifest. > Corollary: Exchanging these documents is "scalable" > > Real world experience: RosettaNet PIP 3A4. Thoroughly specified > schemas describing almost every possible Purchase Order Request and > Response. Guess how long it takes one large computer manufacturer to > shake out enough impedance mismatches to get going with each new > trading partner? 30-60 days, and that is a dedicated team of experts > exchanging test messages, "validating" them, discovering and resolving > the semantic mismatches in even the "valid" documents. So that's one > or two months per partner. How long does it take to ramp on just 100 > partners? Years. Not web scale, is it? Bad standards exist, therefore standards are bad. Nice logic. Next I'm sure you'll tell me about how bad HTML is and why it will never be deployed or understood by anyone. We should require that web browsers provide a form for the server to fill out that informs the server of the client's capabilities. That will help. When the client says it doesn't understand <h3>, the server will know exactly what to do. It'll just translate those h3 elements into h2 elements. Of course the programmer who wrote the server will have anticipated this and written the capability into it. Of course the first round of standardisation is a shambles. It hardly ever works until the authors of clients and servers have some implementation experience and have developed some mutual trust in their mutual interests. 
That is why standards like Atom have defined mechanisms for evolvability, and why Atom has been so long in development despite person-decades of experience in related precursor standards. Document standards represent a conversation between document producers and consumers about their needs. It is a compromise between features and complexity in problem domains that are rarely mapped out well enough to have a sane conversation about. Trying to carry on a conversation outside of that process, however, between your server and my client isn't going to get anywhere. Machines can't communicate unless their programmers already can. Unless the author of the client software and the author of the server software agree out of band, communication cannot happen. > Exchanging pre-defined special purpose documents, as APP does, > satisfies REST's definition, but it's not webby. Forms have proven > their scalability on the web, because they promote loose coupling. > They offer hope that web services could be deployed at web scale. What you are saying is that server-provided user interfaces have proven useful on the web because they don't require prior agreement with the client software about what the user interface will look like. That is all well and good, but doesn't translate into good practice for machine-to-machine communications. When a machine has an Atom document it wants to submit to another machine, it is too late to start messing with the content. The client machine doesn't know how. It has to rely on out of band agreement with the server as to what is acceptable. Benjamin.
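Hugh's browser-autofill counter-example, and Benjamin's objection to it, both fit in a small sketch. Everything here is invented for illustration (the vocabulary, the field names, the helper function): a client that already holds data under its own names can fill only those form fields it can map through a shared vocabulary, and any field outside that vocabulary stays blank, which is exactly the impedance mismatch Benjamin describes.

```python
# Hypothetical shared vocabulary mapping server form-field names to the
# client's own data model (all names invented for this sketch).
VOCABULARY = {"author-email": "email", "author-name": "name"}

# Data the client application already holds under its own names.
CLIENT_DATA = {"email": "julia@example.org", "name": "Julia Child"}

def autofill(form_fields):
    """Fill each requested field from client data via the vocabulary.

    Fields the vocabulary doesn't cover are left blank -- the client has no
    way to know what the server means by them.
    """
    filled = {}
    for field in form_fields:
        key = VOCABULARY.get(field)
        filled[field] = CLIENT_DATA.get(key, "") if key else ""
    return filled
```

A server asking for "author-email" gets an answer; a server asking for "author-fax" gets an empty field, because the already-written client has no such box to offer.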
On 1/4/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > On Wed, 2007-01-03 at 20:06 -0600, Hugh Winkler wrote: > > Real world experience: RosettaNet PIP 3A4. Thoroughly specified > > schemas describing almost every possible Purchase Order Request and > > Response. Guess how long it takes one large computer manufacturer to > > shake out enough impedance mismatches to get going with each new > > trading partner? 30-60 days, and that is a dedicated team of experts > > exchanging test messages, "validating" them, discovering and resolving > > the semantic mismatches in even the "valid" documents. > Bad standards exist, therefore standards are bad. Nice logic. I don't think that's a valid conclusion. RosettaNet is not a bad standard. As I wrote in another post, it is the third generation of ecommerce standards that each tried to learn from and resolve the problems of the previous generation. Each was better than its predecessor. So by the Microsoft rule, RosettaNet should have been pretty good. It's just a difficult problem.
On 1/4/07, Bob Haugen <bob.haugen@...> wrote: > On 1/4/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > > On Wed, 2007-01-03 at 20:06 -0600, Hugh Winkler wrote: > > > Real world experience: RosettaNet PIP 3A4. Thoroughly specified > > > schemas describing almost every possible Purchase Order Request and > > > Response. Guess how long it takes one large computer manufacturer to > > > shake out enough impedance mismatches to get going with each new > > > trading partner? 30-60 days, and that is a dedicated team of experts > > > exchanging test messages, "validating" them, discovering and resolving > > > the semantic mismatches in even the "valid" documents. > > > Bad standards exist, therefore standards are bad. Nice logic. > > I don't think that's a valid conclusion. RosettaNet is not a bad > standard. As I wrote in another post, it is the third generation of > ecommerce standards that each tried to learn from and resolve the > problems of the previous generation. Each was better than its > predecessor. So by the Microsoft rule, RosettaNet should have been > pretty good. > > It's just a difficult problem. > > Exactly. RosettaNet is a great standard... the apotheosis of this style of exchanging full documents. My point is that style is not scalable, and that APP is using the same approach, on a smaller scale.
On 1/4/07, Bob Haugen <bob.haugen@...> wrote: > On 1/3/07, Hugh Winkler <hughw@...> wrote: > > Starry-eyed delusion: A Standards process can capture the union of all > > the semantics any server could manifest. > > Corollary: Exchanging these documents is "scalable" > > > > Real world experience: RosettaNet PIP 3A4. Thoroughly specified > > schemas describing almost every possible Purchase Order Request and > > Response. Guess how long it takes one large computer manufacturer to > > shake out enough impedance mismatches to get going with each new > > trading partner? 30-60 days, and that is a dedicated team of experts > > exchanging test messages, "validating" them, discovering and resolving > > the semantic mismatches in even the "valid" documents. So that's one > > or two months per partner. How long does it take to ramp on just 100 > > partners? Years. Not web scale, is it? > > That fits with my experience. Previous ecommerce standards, ANSI X12 > EDI and EDIFACT, were even worse: took months. > > Some of the same people have now created UBL: a 4th or 5th generation > (X12, EDIFACT, RosettaNet, UBL, depending on whether you put ebXML > into that genealogy). We'll see how quickly new trading partners can > get going using that. I think it's less prescriptive than RosettaNet, > which tried to achieve minimal configuration. > > It would be good to experiment with a forms style using UBL. I think you could construct forms using UBL components, so that Toshiba could have their own PO and Hitachi could have another. Purchasing software would then be able to GET the forms, and complete them, POSTing the right information in the PO, because the clients know the definitions in UBL. Hugh
On 1/4/07, Mark Baker <distobj@...> wrote: > On 1/4/07, Hugh Winkler <hughw@...> wrote: > > It would be good to experiment with a forms style using UBL. > > http://sourceforge.net/projects/xforms4ubl/ > Excellent. So all we need to do is define UBL components for the parts of an atom:entry that don't already have definitions, e.g. ubl:atom-content... and you can publish to your blog using ubl-aware clientware, plus you can order a hard disk too! Hugh
On 1/4/07, Hugh Winkler <hughw@...> wrote: > It would be good to experiment with a forms style using UBL. http://sourceforge.net/projects/xforms4ubl/ Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Roy said: > *sigh* That about sums it up. ;-) > Just ignore the definition of idempotent in RFC 2616. Anything > specified in HTTP that defines how the server shall implement the > semantics of an interface method is wrong, by definition. What > matters is the effect on the interface as expected by the client, > not what actually happens on the server to implement that effect. and > We have to keep dancing around that bush because > terminology is a committee-driven process. Everyone has an opinion > and so no opinion is spec'd consistently. Reminds me of a post I read the other day on "Why Specs Matter", with an amusing binary taxonomy of developers, which lends some insight into why this might be the case. Found here: http://diveintomark.org/archives/2004/08/16/specs Enjoy! Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
On 1/4/07, Andrzej Jan Taramina <andrzej@...> wrote: > Reminds me of a post I read the other day on "Why Specs Matter", with an > amusing binary taxonomy of developers, which lends some insight into why this > might be the case. > > Found here: > > http://diveintomark.org/archives/2004/08/16/specs More recently, from the same blog: http://diveintomark.org/archives/2006/12/07/rest-for-toddlers :-) Alan Dean
On Jan 3, 2007, at 11:48 PM, Mike Schinkel wrote: > Roy T. Fielding wrote: > > The resource in HTTP is the mapping from the entire identifier > > (including scheme, authority, path, and query) to a set of values. > > For two resources to be the same, they must map to the same set > > of values for all time. There is no way to determine that by > > inspecting the identifiers, aside from scheme-defined equivalence. > > I think I get what you are saying, but I want to be sure. Basically > you are > saying that each of the following URLs point to a DIFFERENT > resource as far > as REST is concerned, correct: No. Why would I have said "For two resources to be the same, ..." if I thought that changing a URI always resulted in different resources? They might be different resources, the client generally won't be able to figure that out, so the only safe assumption is that they are different resources until stated otherwise by the server. This is Web Architecture. Resources are an abstraction -- a source of goodness as perceived by the person who linked to that resource that is in the form of a value-giver over time. There are no resources on the Web -- only senders and receivers of representations that have the effect of evaluating a resource mapping at invocation time, thereby becoming "the resource" as we perceive it over time even though we all know it is just a finite data server at any single point in time. The ultimate goal is to choose the one true identifier that will always map to the intended resource, such that the link author (and the millions of people who subsequently copy or bookmark that link) never have to change the link. Of course, we rarely achieve the ultimate goal the first time, and sometimes there are many paths to the ultimate goal that get explored along the way, and sometimes people move the goal posts. 
If there are many URIs for a given resource, the best implementation is for all of the other URIs to redirect to the one URI that is deemed to be "best" for the resource's unique semantics. The reason for that is not REST or Web Architecture (though both are specifically designed to enable it): the reason is network economics as expressed by power laws, Metcalfe's law, PageRank, and a hundred other restatements of the factors that place value on social networks. REST works because it makes absolutely no attempt to understand what the resource is, how it might be implemented, or the scope of how it will change over time. It eliminates the semantic burden of understanding by focusing only on the interface as a means of hiding knowledge from the other side, yet communicating all that needs to be said in the same way that two people communicate -- tossing representations across the gap with a relatively small number of pitch inflections to indicate what is expected in return. In short, REST doesn't care what the resource is or how many URIs identify the same resource, because to care would require understanding that would lead to coupling which is more dangerous than inefficiency. Such communication theory is a lot more abstract than REST, REST is a lot more abstract than Web Architecture (URI, HTTP, HTML, ...), and Web Architecture is somewhat more abstract than any current implementation of that architecture (Apache httpd, Mozilla Firefox, etc.). And yet they all need to influence each other, in various ways, when we attempt to design changes to a living system. ....Roy
Roy T. Fielding wrote: > > Roy T. Fielding wrote: > > > The resource in HTTP is the mapping from the entire identifier > > > (including scheme, authority, path, and query) to a set of values. > > > For two resources to be the same, they must map to the > same set of > > > values for all time. There is no way to determine that by > inspecting > > > the identifiers, aside from scheme-defined equivalence. > > > > I think I get what you are saying, but I want to be sure. Basically > > you are saying that each of the following URLs point to a DIFFERENT > > resource as far as REST is concerned, correct: > > No. Why would I have said "For two resources to be the same, > ..." if I thought that changing a URI always resulted in > different resources? > They might be different resources, the client generally won't > be able to figure that out, so the only safe assumption is > that they are different resources until stated otherwise by > the server. I guess my terminology was confusing. Are you saying then that I CAN have a fully RESTful app that treats the following three URLs as the same? http://www.foo.com http://www.foo.com/ http://www.foo.com/index.php BTW, I am differentiating between what REST requires of a resource and the broader view of resources on the web, because we all know the web is not 100% RESTful as globally implemented by many domain holders. > The ultimate goal is to choose the one true identifier that > will always map to the intended resource such that the link > author (and the millions of people who subsequently copy or > bookmark that link) never have to change the link. I think you are coupling constraints. I could easily have three identifiers that point to the same resource (as you implied above, I think) that ALL never change. I could also limit a resource to one single identifier, and then arbitrarily change it. 
So I don't think narrowing the focus to just one identifier necessarily does any better job of ensuring that that one identifier never changes. Yes, chances are that in some cases one is easier to keep from changing than multiple, but if the URL authority architects its site in advance with multiple persistent URLs per resource in mind, then five URLs pointing to the same resource are no less likely to change than two URLs per resource or even one URL per resource. That said, other than the obvious, what's wrong with links changing? And I'm not being pedantic, I'm exploring out-of-the-box thinking. Is it bad to change a link if that link returns a 301 redirect to the new link? If so, why? I have other thoughts on the matter of broken links, but as I plan to blog them I'll hold them for now; I'll email the list after I make those posts. > If there are many URIs for a given resource, the best > implementation is for all of the other URIs to redirect to > the one URI that is deemed to be "best" for the resource's > unique semantics. The reason for that is not REST or Web > Architecture (though both are specifically designed to enable > it): the reason is network economics as expressed by power > laws, Metcalfe's law, PageRank, and a hundred other > restatements of the factors that place value on social networks. I see the "best practices" you are citing as optimizing for certain benefits at the expense of others. I'm currently doing research to determine how we can optimize for both, because I don't think we should have to settle for one and not the other. > REST works because it makes absolutely no attempt to > understand what the resource is, how it might be implemented, > or the scope of how it will change over time. 
It eliminates > the semantic burden of understanding by focusing only on the > interface as a means of hiding knowledge from the other side, > yet communicating all that needs to be said in the same way > that two people communicate -- tossing representations across > the gap with a relatively small number of pitch inflections > to indicate what is expected in return. In short, REST > doesn't care what the resource is or how many URIs identify > the same resource, because to care would require > understanding that would lead to coupling which is more > dangerous than inefficiency. That sounds reasonable. What do you consider to be "the pitch inflections"? Content type? Other? > Such communication theory is a lot more abstract than REST, > REST is a lot more abstract than Web Architecture (URI, HTTP, > HTML, ...), and Web Architecture is somewhat more abstract > than any current implementation of that architecture (Apache > httpd, Mozilla Firefox, etc.). And yet they all need to > influence each other, in various ways, when we attempt to > design changes to a living system. BTW, throughout history there have been "specifications" designed to constrain human behaviour. One of the better known sets of specifications in the Western world is "The 10 Commandments" and all its derivatives. And as with most other specifications that have followed, including those you mention, they really were designed for the good of society. However, as we all know, people often don't follow them, and sometimes they have good reasons and sometimes bad. My interest is in finding ways to either change the environment so that the reasons they don't follow the rules no longer apply, or to augment the rules so that new goals can be achieved in other ways while still maintaining the fundamental benefits to society. 
In cases where the rules are "handed down by God" my approach doesn't play too well, but when we are talking technology one of the beauties is that we can often add another layer to meet the new goals without sacrificing the old ones. As with any rules or sets of specifications, there are those who are steeped in them and have been working with them for years. Those people have often unconsciously accepted many constraints. Progress is often made by people who "don't know any better" than to question those constraints that have been unconsciously accepted by so many. Understandably, those with significant experience are upset by the things that the newcomers question, for hopefully obvious reasons. And the newcomers usually find that the constraints embraced by the experienced are constraints for good reason. But every so often the newcomers are able to identify unconsciously accepted constraints that really were not constraints after all, and this is when progress and innovation are made. Roy, I am one of those newcomers and you are clearly highly experienced. I have identified numerous issues that seem to plague the web community related to my research on URLs, and I am researching ways to address some of those issues. I ask in advance for you to forgive me for any questions I may ask or any concepts I may propose now or in the future, even if you feel that you've already been down that path with others. As I go about trying to identify some of those unconsciously accepted constraints I will undoubtedly learn the benefits of most of those constraints, but I also hope to uncover some that really were not constraints at all. Understand that I have gained great respect for you and the community in the brief time I've been participating, and that whatever my future involvement, my goal is to ensure that I uphold the web's principles while doing nothing to harm it. Please remember that at one point in time you were the newcomer, and you had ideas which turned out to be good. 
Please give me the same opportunities you were given to research and then question, and it is possible I may ultimately make some important contributions myself. Thanks for listening. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
On Jan 4, 2007, at 2:16 PM, Roy T. Fielding wrote: > On Jan 3, 2007, at 11:48 PM, Mike Schinkel wrote: > >> Roy T. Fielding wrote: >>> The resource in HTTP is the mapping from the entire identifier >>> (including scheme, authority, path, and query) to a set of values. >>> For two resources to be the same, they must map to the same set >>> of values for all time. There is no way to determine that by >>> inspecting the identifiers, aside from scheme-defined equivalence. >> >> I think I get what you are saying, but I want to be sure. Basically >> you are >> saying that each of the following URLs point to a DIFFERENT >> resource as far >> as REST is concerned, correct: > > No. Why would I have said "For two resources to be the same, ..." if > I thought that changing a URI always resulted in different resources? > They might be different resources, the client generally won't be able > to figure that out, so the only safe assumption is that they are > different resources until stated otherwise by the server. > > This is Web Architecture. Resources are an abstraction -- a source of > goodness as perceived by the person who linked to that resource that > is in the form of a value-giver over time. There are no resources on > the Web -- only senders and receivers of representations that have > the effect of evaluating a resource mapping at invocation time, > thereby becoming "the resource" as we perceive it over time even > though we all know it is just a finite data server at any single > point in time. If I understand this correctly, the term resource in the REST and HTTP context means a thing of value as perceived by the people using the system. If all you have is a bunch of URIs, you can't make any assumptions about whether they refer to the same resource or not. But if you have a bunch of URIs plus more semantic information about the resources they represent, then you can make such assumptions. 
In that case, Steve should be OK saying things like: GET /resource Retrieve the entire resource. Query parameters may be available to retrieve only parts of the resource. Yes, in hindsight perhaps the HTTP spec shouldn't have said that about the query parameters. But Steve can declare in his article that for any system that implements his Entity pattern as he defines it, all URLs that are identical up to the query parameters refer to the same resource. And in my case, I can say that in my architecture an article is a resource, and that any article URIs that are identical up to the query parameters refer to the same article resource. Any objections to that way of treating the term resource? Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Hi Roy, On Jan 3, 2007, at 4:52 PM, Roy T. Fielding wrote: > The resource in HTTP is the mapping from the entire identifier > (including scheme, authority, path, and query) to a set of values. > For two resources to be the same, they must map to the same set > of values for all time. There is no way to determine that by > inspecting the identifiers, aside from scheme-defined equivalence. > I assume here by set of "values" you don't mean set of representations returned. These are the values as known to the server, from which it creates and sends representations down to clients. Is that correct? Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
On Jan 4, 2007, at 2:16 PM, Roy T. Fielding wrote: > The ultimate goal is to choose the one true identifier that will > always map to the intended resource, such that the link author (and > the millions of people who subsequently copy or bookmark that link) > never have to change the link. Of course, we rarely achieve the > ultimate goal the first time, and sometimes there are many paths to > the ultimate goal that get explored along the way, and sometimes > people move the goal posts. > > If there are many URIs for a given resource, the best implementation > is for all of the other URIs to redirect to the one URI that is > deemed to be "best" for the resource's unique semantics. The reason > for that is not REST or Web Architecture (though both are specifically > designed to enable it): the reason is network economics as expressed > by power laws, Metcalfe's law, PageRank, and a hundred other > restatements of the factors that place value on social networks. I understand how that should help search engines count how many references there are for calculating PageRank, but could you elaborate on how canonical URLs affect "Metcalfe's law, power laws, and a hundred other restatements..." I did take the canonical URL thing to heart in our new architecture. It may be a bit over-designed, but we generate controllers that canonicalize by dropping trailing slashes and even reordering the query parameters. 
For example: http://www.artima.com/articles?t=java&p=4&o=a http://www.artima.com/articles?p=4&o=a&t=java http://www.artima.com/articles?t=java&o=a&p=4 http://www.artima.com/articles?p=4&t=java&o=a http://www.artima.com/articles?o=a&p=4&t=java http://www.artima.com/articles/?t=java&p=4&o=a http://www.artima.com/articles/?p=4&o=a&t=java http://www.artima.com/articles/?t=java&o=a&p=4 http://www.artima.com/articles/?p=4&t=java&o=a http://www.artima.com/articles/?o=a&p=4&t=java http://www.artima.com/articles/?o=a&t=java&p=4 All get redirected to: http://www.artima.com/articles?o=a&t=java&p=4 Currently there is one URI that doesn't get redirected to the canonical form. It's on my list to fix that: http://artima.com/articles?o=a&t=java&p=4 Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
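The canonicalizing behavior Bill describes can be sketched roughly as follows. This is an invented illustration, not Artima's code, and it sorts query parameters alphabetically by key, whereas Bill's canonical form above uses a site-specific ordering; a real controller would answer a non-canonical request with a 301 redirect to the computed form rather than just computing it.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonicalize(url):
    """Reduce a URL to one canonical form: drop any trailing slash on the
    path and sort the query parameters, so all equivalent spellings
    collapse to a single URI suitable as a 301 redirect target."""
    scheme, netloc, path, query, _ = urlsplit(url)
    path = path.rstrip("/") or "/"          # /articles/ -> /articles
    query = urlencode(sorted(parse_qsl(query)))  # sort by parameter name
    return urlunsplit((scheme, netloc, path, query, ""))
```

With this sketch, every permutation in Bill's list above maps to the same string, which is the property the redirects are meant to guarantee.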
Bill, Something that I'm sure confuses a lot of folks is the fact that in his dissertation, Roy defined "Resource" as a mapping. A mapping is a relation, not a thing. But in common usage, we talk about resources as if they were things. A la REST, a resource maps individual points in time each to a set of values which are indeed representations. A URI only identifies the mapping. (BTW, when Roy says the URI in its entirety maps to a set of values, he is taking a small liberty with his own REST model, I think.) When you talk about "part of a resource" realize that formally you are talking about part of the relation above. Which part? The domain (all the mapped time points)? The range (all the representations for all times, but without the time dimension)? Certain tuples (all mappings for 1994)? I am sure you mean none of the above. In REST, there is no built-in notion of resource decomposition the way you would think of decomposing your car or your publications into parts. And so there can also be no standard for mapping conjugations of URIs onto them. Sure hope this helps, Walden ----- Original Message ----- From: "Bill Venners" <bv-svp@...> To: "Roy T. Fielding" <fielding@...> Cc: "Steve G. Bjorg" <steveb@...>; <rest-discuss@yahoogroups.com> Sent: Thursday, January 04, 2007 7:50 PM Subject: Re: [rest-discuss] Re: Request for feedback: REST for the Rest of Us : Hi Roy, : : On Jan 3, 2007, at 4:52 PM, Roy T. Fielding wrote: : : > The resource in HTTP is the mapping from the entire identifier : > (including scheme, authority, path, and query) to a set of values. : > For two resources to be the same, they must map to the same set : > of values for all time. There is no way to determine that by : > inspecting the identifiers, aside from scheme-defined equivalence. : > : I assume here by set of "values" you don't mean set of : representations returned. These are the values as known to the : server, from which it creates and sends representations down to : clients. 
Is that correct? : : Bill : ---- : Bill Venners : President : Artima, Inc. : http://www.artima.com : : : : : : __________ NOD32 1957 (20070104) Information __________ : : This message was checked by NOD32 antivirus system. : http://www.eset.com : :
On Jan 4, 2007, at 4:50 PM, Bill Venners wrote:
> On Jan 3, 2007, at 4:52 PM, Roy T. Fielding wrote:
>
>> The resource in HTTP is the mapping from the entire identifier
>> (including scheme, authority, path, and query) to a set of values.
>> For two resources to be the same, they must map to the same set
>> of values for all time. There is no way to determine that by
>> inspecting the identifiers, aside from scheme-defined equivalence.
>>
> I assume here by set of "values" you don't mean set of
> representations returned. These are the values as known to the
> server, from which it creates and sends representations down to
> clients. Is that correct?
resource values at time t = { representations | URIs }
also known as representations or redirects.
....Roy
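Roy's one-line definition reads as: a resource is a function from time to a set of values, each value being a representation or a URI (a redirect), and two URIs name the same resource only if their functions agree at every point in time. A toy sketch of that reading (everything here is invented for illustration; real clients, of course, can only sample the mapping, never check it for all time):

```python
from datetime import date

def same_resource(mapping_a, mapping_b, timeline):
    """True if two URI-to-value-set mappings agree at every sampled time t.
    In Roy's terms each value set would hold representations and/or URIs."""
    return all(mapping_a(t) == mapping_b(t) for t in timeline)

# /articles and /articles?p=4 yield different value sets at the same time t,
# so they denote different resources, even though both change over time.
articles    = lambda t: {f"list-page-1-as-of-{t}"}
articles_p4 = lambda t: {f"list-page-4-as-of-{t}"}
mirror      = lambda t: {f"list-page-1-as-of-{t}"}  # same mapping, other URI

timeline = [date(2007, 1, d) for d in (1, 2, 3)]
```

The `mirror` case shows why two distinct URIs can still be the same resource: sameness is a property of the mappings, not of the identifier strings.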
Hi Walden, On Jan 4, 2007, at 5:12 PM, Walden Mathews wrote: > Bill, > > Something that I'm sure confuses a lot of folks is the fact > that in his dissertation, Roy defined "Resource" as a mapping. > A mapping is a relation, not a thing. But in common usage, > we talk about resources as if they were things. > > A la REST, a resource maps individual points in time each to > a set of values which are indeed representations. A URI > only identifies the mapping. (BTW, when Roy says the URI > in its entirety maps to a set of values, he is taking a small > liberty with his own REST model, I think.) > Hmm. Let me ask a pointed question to see if I understand this. By Roy's definition of resource as mapping, is it true that: http://www.artima.com/articles and http://www.artima.com/articles?p=4 Must at any time t return the same representation for those two URIs to refer to the same resource? That does not hold true for these two URIs. They would return different things at the same time t. (I'm trying to figure out if the "collection of articles" conceptual thing equates to a "resource" by Roy's definition.) Thanks. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com > When you talk about "part of a resource" realize that formally > you are talking about part of the relation above. Which part? > The domain (all the mapped time points)? The range (all the > representations for all times, but without the time dimenstion)? > Certain tuples (all mappings for 1994)? > > I am sure you mean none of the above. In REST, there is no > built-in notion of resource decomposition the way you would > think of decomposing your car or your publications into parts. > And so there can also be no standard for mapping conjugations > of URIs onto them. > > Sure hope this helps, > > Walden > > > ----- Original Message ----- > From: "Bill Venners" <bv-svp@...> > To: "Roy T. Fielding" <fielding@...> > Cc: "Steve G. 
Bjorg" <steveb@...>; <rest- > discuss@yahoogroups.com> > Sent: Thursday, January 04, 2007 7:50 PM > Subject: Re: [rest-discuss] Re: Request for feedback: REST for the > Rest of > Us > > > : Hi Roy, > : > : On Jan 3, 2007, at 4:52 PM, Roy T. Fielding wrote: > : > : > The resource in HTTP is the mapping from the entire identifier > : > (including scheme, authority, path, and query) to a set of values. > : > For two resources to be the same, they must map to the same set > : > of values for all time. There is no way to determine that by > : > inspecting the identifiers, aside from scheme-defined equivalence. > : > > : I assume here by set of "values" you don't mean set of > : representations returned. These are the values as known to the > : server, from which it creates and sends representations down to > : clients. Is that correct? > : > : Bill > : ---- > : Bill Venners > : President > : Artima, Inc. > : http://www.artima.com
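Roy's definition can be made concrete with a small sketch. A resource is a mapping from points in time to sets of values (representations or URIs), and two URIs name the same resource only if their mappings agree at every time t. The data below is invented purely to illustrate the definition; it is not Artima's actual behaviour.

```python
# Roy defines a resource as a mapping from points in time to sets of
# values (representations or URIs).  Two URIs name the same resource
# only if their mappings agree at every time t -- which no client can
# verify by inspecting the identifiers.  All data here is hypothetical.

articles = {                 # mapping for .../articles (hypothetical)
    "t1": {"index, page 1"},
    "t2": {"index, page 1, updated"},
}
articles_p4 = {              # mapping for .../articles?p=4 (hypothetical)
    "t1": {"index, page 4"},
    "t2": {"index, page 4"},
}

def same_resource(a, b, times):
    """Same resource iff the mappings agree at every observed time."""
    return all(a.get(t) == b.get(t) for t in times)

# Different value sets at t1, so these are different resources, even
# though both URIs "belong to" the same article collection.
assert not same_resource(articles, articles_p4, ["t1", "t2"])
```

Since a client can only sample the mapping at the times it happens to make requests, it can never prove two URIs are the same resource by inspection, which is Roy's point about scheme-defined equivalence being the only exception.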
On Jan 4, 2007, at 5:37 PM, Roy T. Fielding wrote: > On Jan 4, 2007, at 4:56 PM, Bill Venners wrote: >> All get redirected to: >> >> http://www.artima.com/articles?o=a&t=java&p=4 > > Why don't you redirect to a permalink style URI? The ? will reduce > your cache effectiveness, and is mighty ugly. Well, I guess for > "give me a list of java articles sorted by title" that is okay, > since the articles themselves seem to have permalinks. Note that > > http://www.artima.com/articles/java/index;date;p4 > > is short and says more. YMMV. > How does the question mark reduce cache effectiveness nowadays? I remember reading that some early cache implementations did not cache the responses of URIs with query parameters, because they considered that "dynamic content." But I thought that was old history. As far as the URI you suggest, my hope was to allow users to glean insight into the information architecture of the site by the URI. So /articles would be a collection of articles. /articles/why_put_and_delete would be an article about PUT and DELETE, etc. Anything under /articles/ needs to be an article, because that helps users understand the information architecture. So /articles/java meaning a collection of articles about Java would break the consistency of that mental model for the user. /articles/java should be an article about Java. I also wanted users to be able to chop off pieces of a URI and always still get something. /forums/java_answers/1234/9876 would be a forum message. They could chop off the last part, yielding /forums/java_answers/1234, which would give them the forum topic containing that message. Chopping again would yield /forums/java_answers, which would give them a list of topics in the Java Answers forum. Chopping again would yield /forums, which would give them a list of forums. I think it is exactly the Container pattern from Scott's article, except that I don't support PUT and DELETE.
Each point in the hierarchy is a conceptual thing from the user's perspective. /forums is a collection of forums. /forums/java_answers is one of those forums, which is a collection of topics. /forums/java_answers/1234 is one of those topics, which is a collection of messages. And /forums/java_answers/1234/9876 is one of those messages. At the end of any of these "absolute paths" could be a ? and some parameters that the user can think of as data that will select a particular "view" of the conceptual thing identified by the absolute path. I also wanted users to be able to chop off the entire query portion too, and still get something. But I agree ?s and &s and =s are ugly in those URIs. I think I need the query parameter keys, but in our new architecture, they are always one character. So instead of: http://www.artima.com/articles?o=a&t=java&p=4 I suppose I could do: http://www.artima.com/articles;oa;tjava;p4 Shorter, but I'm not sure it's any less ugly. I'm also not sure it is as obvious to the user that the conceptual thing identified by this URI is http://www.artima.com/articles. Perhaps if there were a different character separating the query parameters, as in: http://www.artima.com/articles;oa,tjava,p4 Or maybe: http://www.artima.com/articles;o=a,t=java,p=4 Or even: http://www.artima.com/articles;otp=a,java,4 >> Currently there is one URI that doesn't get redirected to the >> canonical form. It's on my list to fix that: >> >> http://artima.com/articles?o=a&t=java&p=4 > > Yep, I have the same problem with aliases to gbiv.com. Unfortunately, > my ISP seems to use mod_rewrite's "final rule" within the virtual host > section, so the easy fix doesn't work for me. > One other one that's on my list to fix is this one: http://www.artima.com/articles? It doesn't currently redirect to get rid of the ? Bill
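Bill's rule that users can chop off pieces of a URI and always still get something amounts to saying that every prefix of a hierarchical path resolves. A minimal sketch, with a hypothetical handler table standing in for the real site:

```python
# Every prefix of a hierarchical path resolves to something meaningful.
# The handler table is invented for illustration; the paths follow the
# /forums example from the post.
handlers = {
    "/forums": "list of forums",
    "/forums/java_answers": "topics in Java Answers",
    "/forums/java_answers/1234": "messages in topic 1234",
    "/forums/java_answers/1234/9876": "message 9876",
}

def chop(path):
    """Yield the path and each successively chopped prefix."""
    while path:
        yield path
        path = path.rsplit("/", 1)[0]   # drop the last segment

# Every chop of a valid URI still resolves.
assert all(p in handlers for p in chop("/forums/java_answers/1234/9876"))
```

The design constraint this encodes is that nothing under /forums/ may be anything other than a forum-related collection or item, which is exactly the consistency argument made above for /articles.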
I'd like to get the input from everyone on issues regarding URL structure choices for REST-based systems. What are the pros and cons of the following different sets of URLs: http://www.foo.com/users/ http://www.foo.com/users/john-smith/ http://www.foo.com/users/john-smith/cell-phone/ Vs. http://www.foo.com/?section=users http://www.foo.com/?section=users&user=john-smith http://www.foo.com/?section=users&user=john-smith&phone=cell-phone I'm looking to create an exhaustive list of pros & cons. Thanks in advance for the help. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
Steve G. Bjorg wrote: > I've created a short tutorial on REST and some common > resource/service patterns. I would welcome feedback > on the accuracy of the content. I printed and read your article and made a lot of notes for my questions and comments but the next day I saw others had given you a flurry of feedback. Rather than duplicate any of their feedback, can you let us know when you have a revised version so I can compare my notes to see if I have any outstanding questions/comments? One thing I don't think anyone else mentioned was in the section about "GET", you say: "Let's assume we have an image as a resource at 'mywedding.png'" Yet all your examples with that URL use a ".img" extension: GET http://myserver/myphotos/mywedding.img Was this a typo or am I missing something? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
On Jan 4, 2007, at 4:56 PM, Bill Venners wrote:
>Roy wrote:
>> If there are many URIs for a given resource, the best implementation
>> is for all of the other URIs to redirect to the one URI that is
>> deemed to be "best" for the resource's unique semantics. The reason
>> for that is not REST or Web Architecture (though both are
>> specifically
>> designed to enable it): the reason is network economics as expressed
>> by power laws, Metcalfe's law, PageRank, and a hundred other
>> restatements of the factors that place value on social networks.
>
> I understand how that should help search engines count how many
> references there are for calculating page rank, but could you
> elaborate how canonical URLs affect "Metcalfe's law, power laws, and
> a hundred other restatements..."
Too hard to summarize. This might help explain it.
http://en.wikipedia.org/wiki/Scale-free_network
http://en.wikipedia.org/wiki/PageRank
Canonical URLs mean that new links are created to the same URL as
previous links, which increases linked-to values, which increases
both the perceived node value of a hub and the corresponding values
of the nodes that link to that hub.
> I did take the canonical URL thing to heart in our new
> architecture. It may be a bit over-designed but we generate
> controllers that canonicalize by dropping trailing slashes and even
> reordering the query parameters. For example:
>
> http://www.artima.com/articles?t=java&p=4&o=a
> http://www.artima.com/articles?p=4&o=a&t=java
> http://www.artima.com/articles?t=java&o=a&p=4
> http://www.artima.com/articles?p=4&t=java&o=a
> http://www.artima.com/articles?o=a&p=4&t=java
> http://www.artima.com/articles/?t=java&p=4&o=a
> http://www.artima.com/articles/?p=4&o=a&t=java
> http://www.artima.com/articles/?t=java&o=a&p=4
> http://www.artima.com/articles/?p=4&t=java&o=a
> http://www.artima.com/articles/?o=a&p=4&t=java
> http://www.artima.com/articles/?o=a&t=java&p=4
>
> All get redirected to:
>
> http://www.artima.com/articles?o=a&t=java&p=4
Why don't you redirect to a permalink style URI? The ? will reduce
your cache effectiveness, and is mighty ugly. Well, I guess for
"give me a list of java articles sorted by title" that is okay,
since the articles themselves seem to have permalinks. Note that
http://www.artima.com/articles/java/index;date;p4
is short and says more. YMMV.
> Currently there is one URI that doesn't get redirected to the
> canonical form. It's on my list to fix that:
>
> http://artima.com/articles?o=a&t=java&p=4
Yep, I have the same problem with aliases to gbiv.com. Unfortunately,
my ISP seems to use mod_rewrite's "final rule" within the virtual host
section, so the easy fix doesn't work for me.
....Roy
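A minimal sketch of the canonicalisation Bill describes: strip the trailing slash, put the query parameters in a fixed order, and 301-redirect every non-canonical alias. Sorting keys alphabetically is an assumption made here for illustration; Artima's chosen canonical order (?o=a&t=java&p=4) is evidently not alphabetical, and the redirect machinery is a toy.

```python
# Toy canonicaliser: trailing slash dropped, query keys sorted.
# Alphabetical key order is an assumption for this sketch only.
from urllib.parse import urlsplit, urlencode, parse_qsl

def canonicalize(url):
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"
    query = urlencode(sorted(parse_qsl(parts.query)))
    canonical = f"{parts.scheme}://{parts.netloc}{path}"
    if query:
        canonical += "?" + query
    return canonical

def respond(url):
    """Return (status, location): 301 for any non-canonical alias."""
    canonical = canonicalize(url)
    if url != canonical:
        return 301, canonical
    return 200, url
```

This also handles the stray trailing "?" mentioned above: a URI with an empty query canonicalises to the form without the question mark.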
On Jan 4, 2007, at 5:12 PM, Walden Mathews wrote: > Something that I'm sure confuses a lot of folks is the fact > that in his dissertation, Roy defined "Resource" as a mapping. > A mapping is a relation, not a thing. But in common usage, > we talk about resources as if they were things. Not really, though I agree most people think that way. Natural resources are an abstraction as well -- sure, they are "trees" or "oil" or "gold" or what-have-you at any given point in time, but when we say "that land is rich in resources" we are talking about the ongoing production source of "stuff we want", not individual trees, barrels, or nuggets. Likewise, management often plays allocation games with resources in the abstract before actually mapping those "resources" to people. YMMV. I agree about the confusion. It would have been easier for me to use a different name, but then people would still think that URLs identified files. At least the acronym wouldn't have changed. ....Roy
Bill Venners wrote: > On Jan 4, 2007, at 5:37 PM, Roy T. Fielding wrote: > >> On Jan 4, 2007, at 4:56 PM, Bill Venners wrote: >>> All get redirected to: >>> >>> http://www.artima.com/articles?o=a&t=java&p=4 >> Why don't you redirect to a permalink style URI? The ? will reduce >> your cache effectiveness, and is mighty ugly. Well, I guess for >> "give me a list of java articles sorted by title" that is okay, >> since the articles themselves seem to have permalinks. Note that >> >> http://www.artima.com/articles/java/index;date;p4 >> >> is short and says more. YMMV. >> > How does the question mark reduce cache effectiveness nowadays? I > remember reading that some early cache implementations did not cache > the responses of URIs with query parameters, because they considered > that "dynamic content." But I thought that was old history. If the cache is compliant (and most seem to be reasonably so in this regard at least) then the question mark only affects the default behaviour in the case of there being no cache-control headers.
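The point about compliant caches can be stated as a toy decision rule, loosely simplified from RFC 2616 (section 13.9): explicit cache-control information always wins, and the "?" matters only in the default case where the response carries no freshness information at all. This is an illustration, not real cache code.

```python
# Toy model of whether a cache may reuse a response as fresh.  Loosely
# simplified from RFC 2616; a real cache implements far more than this.
def may_cache_without_validation(uri, headers):
    """May a cache reuse this response without revalidating?"""
    cc = headers.get("Cache-Control", "")
    if "no-store" in cc:
        return False
    if "max-age" in cc or "Expires" in headers:
        return True          # explicit freshness info wins, '?' or not
    # No explicit freshness: RFC 2616 sec. 13.9 tells caches not to
    # treat query URIs as fresh, so the '?' hurts only in this case.
    return "?" not in uri
```

So a server that sends Cache-Control or Expires headers pays no caching penalty for the question mark; only header-less responses fall back to the query-string heuristic.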
On Thu, 2007-01-04 at 16:56 -0800, Bill Venners wrote: > I did take the canonical URL thing to heart in our new architecture. > It may be a bit over-designed but we generate controllers that > canonicalize by dropping trailing slashes and even reordering the > query parameters. For example: > http://www.artima.com/articles?t=java&p=4&o=a > http://www.artima.com/articles?p=4&o=a&t=java ... > http://www.artima.com/articles/?o=a&t=java&p=4 > All get redirected to: > http://www.artima.com/articles?o=a&t=java&p=4 I'm not a fan of redirection for redirection's sake. If it makes for a much simpler server implementation so be it, but in general I think it is poor manners to ask a client to rephrase a request the server understands perfectly well. In all of these cases I would be strongly inclined to process the request normally and respond with the relevant content. I would try to include a link in the content and/or headers to the bookmark or "permalink" url for future reference, but I don't want to clog up the network with repetitions of the same request through the redirection mechanism. I see redirection primarily as a mechanism to defer to a server entity that understands the request properly or to deal with a deprecated url. Roy, you wrote: > No. Why would I have said "For two resources to be the same, ..." if > I thought that changing a URI always resulted in different resources? > They might be different resources, the client generally won't be able > to figure that out, so the only safe assumption is that they are > different resources until stated otherwise by the server. I worry about the notion of a resource being too airy-fairy. I take a fairly simple approach in my conversation. If the urls are different we are talking about two different resources. It might happen that they demarcate the same application state, but clients can't know that. I would suggest that even servers don't really know that.
A mapping from URL to representation might be here today but gone after the restructure tomorrow. One of the aliases might become a redirection next week. Servers cannot say with certainty over the long haul that multiple urls map to the same application state. In that context I am very wary of language that talks about two resources being the "same", even if they might happen to be equivalent for the foreseeable future. I talk about different urls always referring to different resources. At any particular point in time the url demarcates a specific subset of application state, though the server is free to change the mapping from url to application state over time. Interacting with the resource returns, replaces, adds to, destroys, or otherwise operates on the demarcated application state by transferring representations of the state being communicated. Different representations encode the state being communicated into different document types with different tradeoffs with respect to fidelity of semantics, document simplicity, generality of applicability and other design factors. Benjamin.
On Thu, 2007-01-04 at 06:37 -0600, Hugh Winkler wrote: > On 1/4/07, Bob Haugen <bob.haugen@...> wrote: > > On 1/4/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > > > On Wed, 2007-01-03 at 20:06 -0600, Hugh Winkler wrote: > > > > Real world experience: RosettaNet PIP 3A4. Thoroughly specified > > > > schemas describing almost every possible Purchase Order Request > and > > > > Response. Guess how long it takes one large computer > manufacturer to > > > > shake out enough impedance mismatches to get going with each new > > > > trading partner? 30-60 days, and that is a dedicated team of > experts > > > > exchanging test messages, "validating" them, discovering and > resolving > > > > the semantic mismatches in even the "valid" documents. > > > Bad standards exist, therefore standards are bad. Nice logic. > > I don't think that's a valid conclusion. RosettaNet is not a bad > > standard. As I wrote in another post, it is the third generation of > > ecommerce standards that each tried to learn from and resolve the > > problems of the previous generation. Each was better than its > > predecessor. So by the Microsoft rule, RosettaNet should have been > > pretty good. > > It's just a difficult problem. > Exactly. RosettaNet is a great standard... the apotheosis of this > style of exchanging full documents. My point is that style is not > scalable, and that APP is using the same approach, on a smaller scale. I would suggest that by definition any standard that takes "30-60 days" of "experts exchanging test messages, "validating" them, discovering and resolving the semantic mismatches in even the "valid" documents" is a bad standard. That doesn't mean the people are bad. It doesn't mean that the ideas were bad. It just means that all the necessary ingredients of standard preparation haven't come together in that problem domain as yet. If it is working at all for anyone, I suppose it can't be all bad.
Over several more versions it may either become a good standard or form the groundwork for a family of good standards. However it can't presently be called good in the way that HTML is good. HTML works in a now familiar problem domain and can rely on common understanding and shared motivation between participants about what features are necessary and what they should look like. Standards like these can be scaled up with a minimum of effort to a large number of users. That is not to say that HTML was always a good standard. It needed to evolve over time as more users placed more demands on it, and it did. It also drew on earlier document standards and deployed hypertext systems as a grounding for development. Let me take your comments at their merit though. Hugh, you suggest that the style of communication where people get together and agree on a document format before communication takes place is not scalable. I'll pay that in some respects. In the sense that we clearly have good standards today that scale up remarkably well, you are clearly mistaken. However the process of standardisation is only as scalable as available processes of human interaction and participation. That is to say it tends to fracture on socio-political grounds. Business is a hard problem-domain to crack because there are so many people with different ideas about what business is. They each have their own ways of working. They have their own management styles, styles of dealing with clients and suppliers. They have their own vocabularies, so it is hard coming up with a common vocabulary that means the same thing to everyone. It's like coming up with a document format for common law contracts. It can only be defined at a very high level with the real semantics being hidden somewhere in human-readable text. Atom is not in the business league. It is smaller and targeted to a set of people who already know what a "blog" is, and can talk about it in terms familiar to each other.
It has still to pass the test of time that HTML has and see off challenges such as hAtom, but I do tentatively consider Atom to be a good document standard. Benjamin.
Bill Venners wrote: > Hmm. Let me ask a pointed question to see if I understand this. By > Roy's definition of resource as mapping, is it true that: > > http://www.artima.com/articles > > and > > http://www.artima.com/articles?p=4 > > Must at any time t return the same representation for those two URIs > to refer to the same resource? > No, Content negotiation could lead to even http://www.artima.com/articles alone returning different representations to different clients at the same time. I think it's helpful to stop trying to imagine what the resource is. See http://cafe.elharo.com/web/rest-is-like-quantum-mechanics/ All you know is representations, nothing more. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 12/22/06, Hugh Winkler <hughw@...> wrote:
> That darn Google... breaking the web again.
In a fit of irony, even Google is guilty! See the 'delete' link on the
Google Alerts management console...
http://www.google.com/alerts/remove?s={token}
On 1/5/07, Hugh Winkler <hughw@...> wrote:
> Oh -- sorry Alan -- you may be right -- I was commenting on the page I
> got redirected to when I clicked the link -- need to try after
> substituting for the {token} don't I?
Yes - I didn't want to publicise one of my own tokens.
Although it does raise a dialog, if you right-click & copy the link
then put that straight into the address bar (effectively what a robot
would do following hrefs) ... the item is deleted
Yep... GET on the URL after substituting the token of one of my
alerts, deleted the alert.
Oh -- sorry Alan -- you may be right -- I was commenting on the page I
got redirected to when I clicked the link -- need to try after
substituting for the {token} don't I?
On 1/5/07, Hugh Winkler <hughw@...> wrote:
> On 1/5/07, Alan Dean <alan.dean@...> wrote:
> > On 12/22/06, Hugh Winkler <hughw@...> wrote:
> > > That darn Google... breaking the web again.
> >
> > In a fit of irony, even Google is guilty! See the 'delete' link on the
> > Google Alerts management console...
> >
> > http://www.google.com/alerts/remove?s={token}
> >
> That one's OK, because it doesn't delete when you follow the link. It
> invokes an onclick handler that pops up a dialog, "Are you sure you
> want to delete....?". Only the Javascript function executes the
> delete.
>
> Hugh
>
--
Hugh Winkler
Wellstorm Development
http://www.wellstorm.com/
+1 512 694 4795 mobile (preferred)
+1 512 264 3998 office
On 1/5/07, Hugh Winkler <hughw@...> wrote: > Yep... GET on the URL after substituting the token of one of my > alerts, deleted the alert. :-) A good lesson on why client-side validation is not to be trusted
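The failure mode is easy to reproduce in a sketch: a robot dereferences every href it finds, it never runs the onclick handler, and GET is supposed to be safe, so nothing stops it. The server and URLs below are invented for illustration; they only mimic the shape of the Alerts link.

```python
# A link-following robot never sees the confirm() dialog -- it just
# dereferences hrefs.  A toy server that (wrongly) deletes on GET, as
# the Alerts link effectively did for a non-script client, loses data
# to the first crawl.  All names and data here are hypothetical.
from urllib.parse import urlsplit, parse_qs

alerts = {"abc123": "weekly 'REST' alert"}

def handle_get(url):
    """Toy handler that performs a delete on GET -- the anti-pattern."""
    parts = urlsplit(url)
    if parts.path == "/alerts/remove":
        token = parse_qs(parts.query).get("s", [None])[0]
        alerts.pop(token, None)        # side effect on a safe method!
        return "Alert deleted"
    return "some page"

def naive_robot(hrefs):
    for href in hrefs:                 # no JavaScript, no confirmation
        handle_get(href)

naive_robot(["/alerts/remove?s=abc123"])
assert alerts == {}                    # the alert is gone
```

The fix is the usual one: make the destructive operation a POST (or DELETE), since robots and prefetchers are entitled to assume GET has no side effects.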
On 1/5/07, Alan Dean <alan.dean@...> wrote:
> On 12/22/06, Hugh Winkler <hughw@...> wrote:
> > That darn Google... breaking the web again.
>
> In a fit of irony, even Google is guilty! See the 'delete' link on the
> Google Alerts management console...
>
> http://www.google.com/alerts/remove?s={token}
>
That one's OK, because it doesn't delete when you follow the link. It
invokes an onclick handler that pops up a dialog, "Are you sure you
want to delete....?". Only the Javascript function executes the
delete.
Hugh
On Jan 5, 2007, at 5:40 AM, Elliotte Harold wrote: > Bill Venners wrote: > >> Hmm. Let me ask a pointed question to see if I understand this. >> By Roy's definition of resource as mapping, is it true that: >> http://www.artima.com/articles >> and >> http://www.artima.com/articles?p=4 >> Must at any time t return the same representation for those two >> URIs to refer to the same resource? > > No, Content negotiation could lead to even http://www.artima.com/ > articles alone returning different representations to different > clients at the same time. > It is difficult to talk about these things. My intent was to include the possibility of content negotiation in my question. A more pointed question, then, would be: By Roy's definition of resource as mapping, is it true that: http://www.artima.com/articles and http://www.artima.com/articles?p=4 Must at any time t return the same representation given the same request headers to refer to the same resource? > I think it's helpful to stop trying to imagine what the resource > is. See > > http://cafe.elharo.com/web/rest-is-like-quantum-mechanics/ > > All you know is representations, nothing more. > I am trying to understand the definition of resource in the HTTP and REST context, that's all. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Hi Mike, > On Jan 4, 2007, at 9:02 PM, Mike Schinkel wrote: > I'd like to get the input from everyone on issues regarding URL > structure > choices for REST-based systems. What are the pros and cons of the > following > different sets of URLs: I'm a static resource bigot, so I'll answer from that point of view and let someone else defend the other side. :-) A. Static > http://www.foo.com/users/ > http://www.foo.com/users/john-smith/ > http://www.foo.com/users/john-smith/cell-phone/ + shorter + implies fixed resources + implies a single unique resource B. Dynamic (Long) > http://www.foo.com/?section=users > http://www.foo.com/?section=users&user=john-smith > http://www.foo.com/?section=users&user=john-smith&phone=cell-phone I actually think this is the wrong contrast. I would propose instead: C. Dynamic (Short) > http://www.foo.com/?user=* > http://www.foo.com/?user=john-smith > http://www.foo.com/?user=john-smith&phone=cell-phone Since "section=users" seems redundant with "user=". Given "A" and "C", I actually think which is better becomes context-dependent. If you have a static tree of resources the end user drills down, then "A" is the most natural. If you have a huge database where the user could be sorting on many different fields, then "C" might be the simpler construction. Hope this helps. :-) -enp
This is a FAQ, Mike. I'm sure a search for "query" in the archives will turn up a gold mine of pros/cons. If you could put what you find on the RESTwiki (the FAQ page in particular), it would be appreciated. Cheers, Mark. On 1/5/07, Mike Schinkel <mikeschinkel@...> wrote: > I'd like to get the input from everyone on issues regarding URL structure > choices for REST-based systems. What are the pros and cons of the following > different sets of URLs: > > http://www.foo.com/users/ > http://www.foo.com/users/john-smith/ > http://www.foo.com/users/john-smith/cell-phone/ > > Vs. > http://www.foo.com/?section=users > http://www.foo.com/?section=users&user=john-smith > http://www.foo.com/?section=users&user=john-smith&phone=cell-phone > > I'm looking to create an exhaustive list of pros & cons. > > Thanks in advance for the help. > > -- > -Mike Schinkel > http://www.mikeschinkel.com/blogs/ > http://www.welldesignedurls.org/ -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Roy T. Fielding wrote: > On Jan 3, 2007, at 5:13 PM, Bill de hOra wrote: >> Roy T. Fielding wrote: >> > >> > The resource in HTTP is the mapping from the entire identifier >> > (including scheme, authority, path, and query) to a set of values. >> >> I'm reluctant to quibble with an editor, but I was referring to the >> means of identification as laid out, and not the substance of the >> resource. > > What is the difference? ;-) The same as the one between UR and URI :) Anyway, help me out here - if you're saying the resource identifier includes the query part, does that conflict with the wording in 3.2.2, (which I read as excluding it), or am I misreading one (or both)? cheers Bill
Hugh Winkler wrote: > > > On 1/4/07, Bob Haugen <bob.haugen@... > <mailto:bob.haugen%40gmail.com>> wrote: > > On 1/4/07, Benjamin Carlyle <benjamincarlyle@... > <mailto:benjamincarlyle%40optusnet.com.au>> wrote: > > > On Wed, 2007-01-03 at 20:06 -0600, Hugh Winkler wrote: > > > > Real world experience: RosettaNet PIP 3A4. Thoroughly specified > > > > schemas describing almost every possible Purchase Order Request and > > > > Response. Guess how long it takes one large computer manufacturer to > > > > shake out enough impedance mismatches to get going with each new > > > > trading partner? 30-60 days, and that is a dedicated team of experts > > > > exchanging test messages, "validating" them, discovering and > resolving > > > > the semantic mismatches in even the "valid" documents. > > > > > Bad standards exist, therefore standards are bad. Nice logic. > > > > I don't think that's a valid conclusion. RosettaNet is not a bad > > standard. As I wrote in another post, it is the third generation of > > ecommerce standards that each tried to learn from and resolve the > > problems of the previous generation. Each was better than its > > predecessor. So by the Microsoft rule, RosettaNet should have been > > pretty good. > > > > It's just a difficult problem. > > > > > > Exactly. RosettaNet is a great standard... the apotheosis of this > style of exchanging full documents. My point is that style is not > scalable, and that APP is using the same approach, on a smaller scale. I'm not sure where to begin with this. I dislike muddled conversations when it comes to semantics. All the business processing standards I've seen are inherently non-scalable, if by non-scalable you mean the number of actors that can share information without resorting to effort in out of band agreements. 
That XForms has a means of exception management (show it to a person) and does schema validation in advance via constraints instead of defensive programming doesn't have me thinking it's any more applicable or valuable than using Atom - indeed it has me thinking it's going to be riddled with security holes because to honor an xform constraint you need a trustworthy client. In the general case this is a symbolic AI problem. If you want to move further along, the applicable state of the art here is denotational semantics as used in KR, RDF interchange, or agent based systems like FIPA. No-one seems to be sure how to design such things in a way that they garner wide adoption on existing infrastructure. cheers Bill
Mike Schinkel wrote: > I printed and read your article and made a lot of notes for my > questions and comments but the next day I saw others had given you a flurry > of feedback. Rather than duplicate any of their feedback, can you let us > know when you have a revised version so I can compare my notes to see if I > have any outstanding questions/comments? > > One thing I don't think anyone else mentioned was in the section about > "GET", you say: > > "Let's assume we have an image as a resource at > 'mywedding.png'" > > Yet all your examples with that URL use a ".img" extension: > > GET http://myserver/myphotos/mywedding.img > > Was this a typo or am I missing something? Yes, that's a typo. It's fixed now, thanks. For now, I'm collecting the feedback on the feedback page [1]. I'll integrate it over the weekend and post about it here when it's ready. Many thanks to everyone who has taken the time to send me their comments! Cheers, - Steve [1] http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us/Feedback -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Unfortunately, the REST wiki seems to be unreliable as hell (sorry), but IIRC (and my bookmark does not deceive me) this http://rest.blueoxen.net/cgi-bin/wiki.pl?PathsAndQueryStrings is a page I've found useful in the past. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On Jan 5, 2007, at 6:25 PM, Mark Baker wrote: > This is a FAQ, Mike. I'm sure a search for "query" in the archives > will turn up a gold mine of pros/cons. > > If you could put what you find on the RESTwiki (the FAQ page in > particular), it would be appreciated. > > Cheers, > > Mark. > > On 1/5/07, Mike Schinkel <mikeschinkel@...> wrote: > > I'd like to get the input from everyone on issues regarding URL > structure > > choices for REST-based systems. What are the pros and cons of the > following > > different sets of URLs: > > > > http://www.foo.com/users/ > > http://www.foo.com/users/john-smith/ > > http://www.foo.com/users/john-smith/cell-phone/ > > > > Vs. > > http://www.foo.com/?section=users > > http://www.foo.com/?section=users&user=john-smith > > http://www.foo.com/?section=users&user=john-smith&phone=cell-phone > > > > I'm looking to create an exhaustive list of pros & cons. > > > > Thanks in advance for the help. > > > > -- > > -Mike Schinkel > > http://www.mikeschinkel.com/blogs/ > > http://www.welldesignedurls.org/ > > -- > Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca > Coactus; Web-inspired integration strategies http://www.coactus.com
On 1/5/07, Bill de hOra <bill@...> wrote:
> Hugh Winkler wrote:
> > Exactly. RosettaNet is a great standard... the apotheosis of this style of exchanging full documents. My point is that style is not scalable, and that APP is using the same approach, on a smaller scale.
>
> All the business processing standards I've seen are inherently non-scalable, if by non-scalable you mean the number of actors that can share information without resorting to effort in out of band agreements.

Yes, that's what I mean.

> That XForms has a means of exception management (show it to a person) and does schema validation in advance via constraints instead of defensive programming doesn't have me thinking it's any more applicable or valuable than using Atom - indeed it has me thinking it's going to be riddled with security holes because to honor an xform constraint you need a trustworthy client.

I am wide open to suggestions that XForms may not be up to the task. I suggested XForms because there is a spec for it, and since it's all about XML, you can make a small leap and declare that the Relax NG and the RFC define the meanings of elements. So you can sidestep opening the semantic web discussion.

Constraints just emulate what an HTML form indicates to humans: the HTML form puts one line for authors, not 10, so the browser cannot possibly accept more than a single author to submit to the server.

[Side note about exceptions: even in a big machine-to-machine system, exceptional conditions usually have to percolate their way back to a human for handling. It may just be an error log, and the solution may be... Dang, our purchasing agent/blog syndicator/whatever needs to understand this new term the server at XYZ.com is demanding.]

> In the general case this is a symbolic AI problem. If you want to move further along, the applicable state of the art here are denotational semantics as used in KR, RDF interchange, or agent based systems like FIPA.
> No-one seems to be sure how to design such things in a way that they garner wide adoption on existing infrastructure.

Yeah, well I was hoping this discussion would begin the thinking about "wide adoption on existing infrastructure". Because if XForms isn't up to it, or if RDF forms cannot be brought along far enough, to handle a simple little web service like APP, then we'll never see web services adopted as broadly as the web -- which is the purpose of a lot of discussion on this list.

My thinking is that APP is bite-sized enough to where some small unambitious forms language could lead the way... "microforms". I don't think it's going to take an Apollo project and a bunch of egghead professors to start this work... just another little brushfire of web innovation.

Hugh
On Jan 5, 2007, at 4:56 AM, Benjamin Carlyle wrote:
> On Thu, 2007-01-04 at 16:56 -0800, Bill Venners wrote:
>> I did take the canonical URL thing to heart in our new architecture. It may be a bit over-designed but we generate controllers that canonicalize by dropping trailing slashes and even reordering the query parameters. For example:
>> http://www.artima.com/articles?t=java&p=4&o=a
>> http://www.artima.com/articles?p=4&o=a&t=java
> ...
>> http://www.artima.com/articles/?o=a&t=java&p=4
>> All get redirected to:
>> http://www.artima.com/articles?o=a&t=java&p=4
>
> I'm not a fan of redirection for redirection's sake. If it makes for a much simpler server implementation so be it, but in general I think it is poor manners to ask a client to rephrase a request the server understands perfectly well.

Well, in my case it makes the server a bit more complicated, not simpler. By default using Java servlets my server would process all those URIs the same. I had to add code to canonicalize those URIs into one URI via redirection.

I realized I do one other thing that I didn't mention. If a URI comes in with a query param at its default value, I redirect to the same URI minus that query param. I didn't do this for redirection's sake, but for the purpose of helping search engines figure out that two different URIs really refer to the same thing. If the search engine can't figure out that two different URIs refer to the same thing, and half of the inbound links use each URI, then Google-like PageRank algos won't put that page as high in search results as it would have with a canonical URI. So I invested in some code to canonicalize URIs that my infrastructure didn't already canonicalize, in the hopes of a payback in increased traffic from search engines.

> In all of these cases I would be strongly inclined to process the request normally and respond with the relevant content.
> I would try to include a link in the content and/or headers to the bookmark or "permalink" url for future reference, but I don't want to clog up the network with repetitions of the same request through the redirection mechanism. I see redirection primarily as a mechanism to defer to a server entity that understands the request properly or to deal with a deprecated url.

Since all non-canonical URIs are redirected, it should be extremely rare that someone would ever link to one of them. So oddly enough, by doing the redirects I will actually be preventing people from linking to a non-canonical URI, in which case the server would never need to do any redirects.

Bill

----
Bill Venners
President
Artima, Inc.
http://www.artima.com

> Roy, you wrote:
>> No. Why would I have said "For two resources to be the same, ..." if I thought that changing a URI always resulted in different resources? They might be different resources, the client generally won't be able to figure that out, so the only safe assumption is that they are different resources until stated otherwise by the server.
>
> I worry about the notion of a resource being too airy-fairy. I take a fairly simple approach in my conversation. If the urls are different we are talking about two different resources. It might happen that they demarcate the same application state, but clients can't know that. I would suggest that even servers don't really know that. A mapping from URL to representation might be here today but gone after the restructure tomorrow. One of the aliases might become a redirection next week. Servers cannot say with certainty over the long haul that multiple urls map to the same application state. In that context I am very wary of language that talks about two resources being the "same", even if they might happen to be equivalent for the foreseeable future.
>
> I talk about different urls always referring to different resources.
At > any particular point in time the url demarcates a specific subset of > application state, though the server is free to change the mapping > from > url to application state over time. Interacting with the resource > returns, replaces, adds to, destroys, or otherwise operates on the > demarcated application state by transferring representations of the > state being communicated. Different representations encode the state > being communicated into different document types with different > tradeoffs with respect to fidelity of semantics, document simplicity, > generality of applicability and other design factors. > > Benjamin. > >
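Bill's canonicalization scheme from earlier in the thread (drop trailing slashes, drop query params at their default value, normalize param order, 301-redirect everything else) can be sketched as follows. His actual code is Java servlet code we haven't seen; this is a hypothetical Python sketch, and the alphabetical param ordering and the `DEFAULTS` table are assumptions, not his actual rules:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical table of default query-param values; a param at its
# default is dropped entirely from the canonical URI.
DEFAULTS = {"o": "d"}  # assumption: 'o=d' is the default ordering

def canonicalize(url):
    """Return the canonical form of a URL by:
    1. dropping any trailing slash on the path,
    2. dropping query params that are at their default value,
    3. sorting the remaining query params (an assumed ordering)."""
    scheme, netloc, path, query, frag = urlsplit(url)
    if path.endswith("/") and path != "/":
        path = path[:-1]
    params = [(k, v) for k, v in parse_qsl(query) if DEFAULTS.get(k) != v]
    params.sort()
    return urlunsplit((scheme, netloc, path, urlencode(params), frag))

def handle(url):
    """301-redirect any non-canonical URI; serve canonical ones directly."""
    canonical = canonicalize(url)
    if canonical != url:
        return 301, canonical  # ask the client to re-request the canonical URI
    return 200, url
```

Under this sketch, all of Bill's example URIs collapse to a single canonical form, so inbound links and search-engine link counts accumulate on one URI.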
Nic James Ferrier wrote:
> John Panzer <jpanzer@...> writes:
>
>> Authentication is a very challenging topic and I think it's one that is going to be a gating factor in deployment of more sophisticated web services of all kinds. Note that the most used web services today don't require authentication, and I think that's partly because there isn't a really good answer for this right now. Any suggestions?
>
> I can see where you're coming from but I can't fully agree.
>
> RESTful authentication is possible (I'm doing it... my app will be announced here this week!)

Nic -- any update on your app? I'm interested to see how you're handling these issues.

--
Abstractioneer <http://feeds.feedburner.com/aol/SzHO>
John Panzer
System Architect
http://abstractioneer.org
: I worry about the notion of a resource being too airy-fairy. I take a fairly simple approach in my conversation. If the urls are different we are talking about two different resources. It might happen that they demarcate the same application state, but clients can't know that. I would suggest that even servers don't really know that. A mapping from URL to representation might be here today but gone after the restructure tomorrow. One of the aliases might become a redirection next week. Servers cannot say with certainty over the long haul that multiple urls map to the same application state. In that context I am very wary of language that talks about two resources being the "same", even if they might happen to be equivalent for the foreseeable future.

Problem is, without that "sameness", there is no notion of identity at all, and no resource. Everything is itself and perhaps something and everything else from time to time.* Who can live in a universe like that? You can't convict a murderer, because it was never him; it was someone else who happened to "demarcate the same state"... on that particular day.

* Made me think of "I am the Walrus" :-)

Walden
Hi Walden,

On Jan 5, 2007, at 4:05 PM, Walden Mathews wrote:
> : It is difficult to talk about these things. My intent was to include the possibility of content negotiation in my question. A more pointed question, then, would be: By Roy's definition of resource as mapping, is it true that:
> :
> : http://www.artima.com/articles
> : and
> : http://www.artima.com/articles?p=4
> :
> : must at any time t return the same representation given the same request headers to refer to the same resource?
>
> If you're sure you've locked out all variation by that approach, then the answer should be yes. But I don't know for sure that content negotiation is deterministic.
>
> A safer way to go is to pretend that you have the all-powerful client that is able to fetch all possible representations from both URIs at all times. Then for all times t, the set of reps from each URI must match. That's as precise as I can say it without resorting to formalisms.

OK. Thanks for the explanation. I think this is the definition from Roy's thesis:

More precisely, a resource R is a temporally varying membership function MR(t), which for time t maps to a set of entities, or values, which are equivalent. The values in the set may be resource representations and/or resource identifiers. A resource can map to the empty set, which allows references to be made to a concept before any realization of that concept exists -- a notion that was foreign to most hypertext systems prior to the Web [61]. Some resources are static in the sense that, when examined at any time after their creation, they always correspond to the same value set. Others have a high degree of variance in their value over time. The only thing that is required to be static for a resource is the semantics of the mapping, since the semantics is what distinguishes one resource from another.
So at any time t, the values in the set must be the same for any two URIs that claim to reference the same resource. Therefore page four and page one of my articles list are two different resources as the term resource is used in REST. And therefore I think that given Steve's "REST for the rest of us" article is attempting to explain REST, he should perhaps not say things like:

GET /resource
    Retrieve the entire resource. Query parameters may be available to retrieve only parts of the resource.

For lack of a better term, and since "resource," "entity" and "representation" are taken, I might call the conceptual thing an "object" and its query-param-accessed variations "views." /articles refers to a conceptual collection of articles object at Artima, and you can get different views of that object by adding query parameters, such as /articles?p=4 or /articles?t=java. Each one of those views is a different resource in REST terms.

Bill
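Roy's definition quoted above can be made concrete with a toy model. Everything here is invented for illustration (the resource names and value strings are hypothetical); the point is only that a resource is a function from time to a set of equivalent values, and that "same resource" means the functions agree at every time t:

```python
import datetime

# Per Fielding's thesis, a resource is a temporally varying membership
# function MR(t): for a time t it yields the set of equivalent values
# (representations and/or identifiers) the resource maps to at that time.
# Here each "resource" is modeled as a Python function: time -> frozenset.

def todays_articles(t):
    # A dynamic resource: its value set changes as time passes.
    return frozenset({f"article-list-as-of-{t.date()}"})

def rest_thesis_chapter_5(t):
    # A static resource: the same value set at every time after creation.
    return frozenset({"fielding-dissertation-chapter-5"})

def same_resource(r1, r2, times):
    """Two URIs name the same resource only if their value sets agree at
    *every* time t. Sampling a few instants can only refute sameness,
    never prove it -- which is why clients must assume difference."""
    return all(r1(t) == r2(t) for t in times)
```

Under this model, /articles and /articles?p=4 are different resources because there are times at which their value sets differ, exactly as Bill argues.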
: It is difficult to talk about these things. My intent was to include the possibility of content negotiation in my question. A more pointed question, then, would be: By Roy's definition of resource as mapping, is it true that:
:
: http://www.artima.com/articles
: and
: http://www.artima.com/articles?p=4
:
: must at any time t return the same representation given the same request headers to refer to the same resource?

If you're sure you've locked out all variation by that approach, then the answer should be yes. But I don't know for sure that content negotiation is deterministic.

A safer way to go is to pretend that you have the all-powerful client that is able to fetch all possible representations from both URIs at all times. Then for all times t, the set of reps from each URI must match. That's as precise as I can say it without resorting to formalisms.

Walden
Hmm, that's been there more than two years - I pointed it out (and two other
situations in Google's Alert system) as part of an article back in 2004. I
honestly thought they'd fix that - not necessarily because of that article,
but because it's just a bug.
http://www.xml.com/pub/a/2004/12/01/pubsub.html
> -----Original Message-----
> From: rest-discuss@yahoogroups.com
> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Alan Dean
> Sent: Friday, January 05, 2007 6:02 AM
> To: rest-discuss@yahoogroups.com
> Subject: Re: [rest-discuss] An apocryphal example of why GET
> should be safe ...
>
> On 12/22/06, Hugh Winkler <hughw@...> wrote:
> > That darn Google... breaking the web again.
>
> In a fit of irony, even Google is guilty! See the 'delete'
> link on the Google Alerts management console...
>
> http://www.google.com/alerts/remove?s={token}
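The irony in the Google Alerts 'delete' link is the classic unsafe-GET mistake: a state-changing operation reachable by following a plain hyperlink, where any crawler or link-prefetcher can destroy data. A minimal sketch of the distinction (the routes and storage are hypothetical, not Google's actual system):

```python
# Toy dispatcher illustrating why GET must be safe: state-changing
# operations are only reachable via POST, so a crawler or prefetcher
# that blindly follows GET links cannot delete anything.

alerts = {"abc123": "REST mailing list digest"}  # hypothetical alert store

def handle(method, path, params):
    if method == "GET" and path == "/alerts":
        # Safe: a GET may be repeated or prefetched without side effects.
        return 200, dict(alerts)
    if method == "POST" and path == "/alerts/remove":
        # The unsafe operation is gated behind POST, so it only happens
        # when a client deliberately submits it (e.g. via a form), never
        # by following a link like /alerts/remove?s={token}.
        alerts.pop(params.get("s"), None)
        return 200, dict(alerts)
    if method == "GET" and path == "/alerts/remove":
        # The buggy pattern from the thread: refuse it instead of deleting.
        return 405, {"error": "use POST; GET must be safe"}
    return 404, {}
```

With the real Google Alerts link, the delete happened on the GET branch, which is exactly what made it vulnerable to "that darn Google... breaking the web again".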
Bill Venners wrote: > I am trying to understand the definition of resource in the HTTP and > REST context, that's all. I find it useful to work in a Copenhagen-like model in which there is no resource, and no need to define it. All we can observe are the URIs and representations. Based on these we can predict server behavior without ever understanding what a resource really is. Unlike quantum mechanics, it's not true that there really aren't any hidden variables, but it's very useful to act as if there aren't any. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Hi Rusty,

On Jan 6, 2007, at 7:33 AM, Elliotte Harold wrote:
> Bill Venners wrote:
>> I am trying to understand the definition of resource in the HTTP and REST context, that's all.
>
> I find it useful to work in a Copenhagen-like model in which there is no resource, and no need to define it. All we can observe are the URIs and representations. Based on these we can predict server behavior without ever understanding what a resource really is. Unlike quantum mechanics, it's not true that there really aren't any hidden variables, but it's very useful to act as if there aren't any.

I think the way Roy defined resource in his thesis was as the mapping over time of the values or entities, the actual data returned. I thought that made sense, and it really is something. The resource is that mapping over time. He doesn't define it as the conceptual thing behind the curtain whose representations result in that mapping. It *is* that mapping.

Bill
On Fri, 2007-01-05 at 14:41 -0600, Hugh Winkler wrote:
> On 1/5/07, Bill de hOra <bill@...> wrote:
> > Hugh Winkler wrote:
> > > Exactly. RosettaNet is a great standard... the apotheosis of this style of exchanging full documents. My point is that style is not scalable, and that APP is using the same approach, on a smaller scale.
> >
> > All the business processing standards I've seen are inherently non-scalable, if by non-scalable you mean the number of actors that can share information without resorting to effort in out of band agreements.
>
> Yes, that's what I mean.
...
> Yeah, well I was hoping this discussion would begin the thinking about "wide adoption on existing infrastructure". Because if XForms isn't up to it, or if RDF forms cannot be brought along far enough, to handle a simple little web service like APP, then we'll never see web services adopted as broadly as the web -- which is the purpose of a lot of discussion on this list.

But again, it isn't a matter of the protocol being up to it or not. It is about whether the client is up to it. The thing is that you can't communicate without human-level agreement. This applies to protocols built around document standards in which the agreement is encoded directly into client and server. It also applies to protocols built around forms. In the forms case the server provides enough information in the form such that a human-level intelligence can figure out what is meant by all of the fields and attempt to fill them in correctly. The form is a user interface, and the same kind of human intelligence needs to be applied to filling in or manipulating any user interface. The terminology has to meet basic criteria of collective understanding even with human-level intelligence. The human must know what an author is in order to fill out the author field, and map it first to the concept of "well, that's me" and then to their own name as they would like it to appear.
They might have to take a guess at how the information will be used to know what form of their name you want. They may have to refer to a help page to understand all of the necessary intricacies involved.

So I think what you really want to ask are the dual queries of "How do we prevent technology issues getting in the way of human-level agreement?" and "How do we best achieve human-level agreement in a world of subcultures and individuals?". The machine document-exchange style relies on agreement being made before the communication. The forms-exchange style relies on agreement being made with a human in the loop during the conversation as they determine how to fill out the form in the way the form provider intended. An incorrectly filled-out form is a failure to agree. A correctly filled-out form is the result of agreement via the form monologue being understood sufficiently well by the form user.

I think there is a sliding scale to this. Will all atom publication use atompub? Of course not. Most servers will allow users to submit new articles online using their web browser. However, when there is no human in the loop at the time of submission or it is difficult to engage the user in the communications process you need prior agreement. Consider the program that allowed a user to write their article offline. Now the client has atom-equivalent data to submit. It doesn't want to fill out a form, and can do very little to have the user submit different content if the server fails to accept the submission. You need good agreement where you want documents to be understood for decades as well. After all, when human agreement is required to be sustained for a long time it also takes a long time to come to a point of agreement.

Within the machine document-exchange style there is also a sliding scale. I could agree between my client software and your server software on a document format.
This wouldn't require standardisation over a large user base, although we are still defining a standard. This would be our own vocabulary not understood by anyone else. Essentially, our network effects would be limited to the size of the network between you and me. On the other end of the scale are your htmls, atoms and other widely-agreed document formats. These require a lot more effort to agree, and more effort to change once the initial agreement has come about. They will be understood for a long time by a large number of people. They won't be costly to roll out into new applications. But then again, they are limited in the set of problems they can solve. They can only solve problems that affect a majority or significant minority of the standard's contributors.

If we contrast this scale to the development of human language we can see that humans generally know a small number of general purpose languages that can solve common problems. Uncommon problems tend to require specialised vocabulary. Various sciences and technology problem domains will often have specific jargon that allows uncommon problems to be solved. I think that the way we define documents for machines will eventually follow this kind of trend. The jargon could be defined as part of problem-specific document standards or as adjuncts to more general standards. Perhaps our common language for expressing client/server interaction is HTML, but within html we might define a special language (hCalendar) for expressing information in the time and date problem domain.

Perhaps revealingly, this kind of mixed vocabulary set can also be combined with form submission for very special purpose problem domains that might apply only to a single server on the Internet. These problem domains afford the least automation and require the greatest application of human-level intelligence... but a special problem domain is a special problem domain.
I think an important focus for evolving the general concept of document definition needs to be built around allowing not only evolution of the global concepts, but inclusion of vocabularies that may be problem-specific. The main reason that this is difficult is because over time our perspective on the vocabularies may change. What was once standard might one day be shelved as a special vocabulary. What was once special may become part of a standard vocabulary. This process can be difficult when various special vocabularies overlap or conflict in any way.

I think forms are important. Don't get me wrong. However, I think that the basic approach of using documents to pass information between machines will be an important one for the foreseeable future. These documents must solve most problems with a general vocabulary, but may still require special vocabularies to be built on top of the general and combined in interesting ways.

If this makes me sound like I am on the RDF bandwagon, well... I'm yet to be sold. I'm not convinced RDF solves the evolution problem as particular terms change in meaning or emphasis over time. I'm pretty sure I don't want my documents to have to define ten namespaces at the top and make sure I select terms from the appropriate namespace for every element in the file. Perhaps the problem is as simple as placing a <problem-domain> marker into atom for any special problem domain, and allowing its sub-elements to float in and out of the main atom feed and entry elements over subsequent atom versions. Perhaps a special <vendor> element would make sense to collect anything specific to a particular application. The important thing would be to allow these special vocabularies to evolve separately to atom proper as required.

Benjamin.
On Fri, 2007-01-05 at 19:15 -0500, Walden Mathews wrote:
> : I worry about the notion of a resource being too airy-fairy. I take a fairly simple approach in my conversation. If the urls are different we are talking about two different resources. It might happen that they demarcate the same application state, but clients can't know that. I would suggest that even servers don't really know that. A mapping from URL to representation might be here today but gone after the restructure tomorrow. One of the aliases might become a redirection next week. Servers cannot say with certainty over the long haul that multiple urls map to the same application state. In that context I am very wary of language that talks about two resources being the "same", even if they might happen to be equivalent for the foreseeable future.
>
> Problem is, without that "sameness", there is no notion of identity at all, and no resource. Everything is itself and perhaps something and everything else from time to time.* Who can live in a universe like that? You can't convict a murderer, because it was never him; it was someone else who happened to "demarcate the same state"... on that particular day.

Well, you can't convict a resource. However you can put a proxy between the resource and those who it might harm. The proxy could match requests destined for http://example.com/thecriminal and treat them specially, for example by rejecting them outright. If correspondence was still possible via http://example.com/thecriminal? you would need to match that too, but you might also need to match http://example.com/thecriminalsbrother who is colluding with the criminal to allow message exchanges to continue. Are the criminal and the criminal's brother the same resource? No, though they are equivalent in this context for the problem the proxy is trying to solve. Are http://example.com/thecriminal and http://example.com/thecriminal? the same resource?
I would say no. One day the second resource might be disabled when the bug that caused the urls to be handled equivalently is fixed. Then they will no longer necessarily require identical treatment.

I don't go for the sameness of resources or urls. I think sameness is a concept that only really applies to urns in spaces where a single urn couldn't be specified for technical reasons. Even then, if they are equivalent... what purposes are they equivalent for? Can you guarantee that they will be equivalent for every purpose I can conceive of? Same is too loose a concept for me. I can only really define it one way: is the identification either identical or defined by the scheme to be equivalent?

So in answer to your question, a resource identified by a particular url is the same as itself. That is all we need for the purposes of identity. For other purposes we need more problem-specific concepts of equivalence to make statements that can be understood.

Benjamin.
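Benjamin's blocking-proxy thought experiment can be sketched directly: the proxy cannot "convict" a resource, but it can decide which URLs are equivalent *for its own purposes* and reject them. The URLs are the ones from his post; the patterns and return values are hypothetical:

```python
import re

# Sketch of the proxy from the example: it matches URLs it considers
# equivalent for access-control purposes and rejects them outright.
# Note that /thecriminal and /thecriminalsbrother need separate patterns;
# the proxy's notion of equivalence is its own, not a property of the urls.
BLOCKED = [
    re.compile(r"^http://example\.com/thecriminal(\?.*)?$"),
    re.compile(r"^http://example\.com/thecriminalsbrother(\?.*)?$"),
]

def proxy(url):
    """Reject any URL this proxy deems equivalent to the blocked resource
    for its purposes; forward everything else upstream."""
    if any(p.match(url) for p in BLOCKED):
        return 403, "blocked by proxy policy"
    return 200, "forwarded upstream"
```

The sketch makes Benjamin's point mechanical: the proxy's equivalence class is ephemeral and purpose-specific, and fixing the trailing-`?` bug on the origin server would change the class without changing any identifier.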
Benjamin,

In systems design, it is important to collapse concepts which within the system have no real distinction. We refactor code, establish alias lookup schemes, and probably do a whole raft of other things to factor out redundancy noise in the interest of simplicity and clarity.

Obsolete stuff should get garbage collected. In the system you propose, that can't happen. You are making identifiers too important, IMO. More important than what they identify.

Walden

----- Original Message -----
From: "Benjamin Carlyle" <benjamincarlyle@...>
To: "Walden Mathews" <waldenm@...>
Cc: "Bill Venners" <bv-svp@...>; "Roy T. Fielding" <fielding@...>; "Mike Schinkel" <mikeschinkel@...>; <rest-discuss@yahoogroups.com>
Sent: Saturday, January 06, 2007 11:52 PM
Subject: Re: [rest-discuss] Re: Request for feedback: REST for the Rest of Us

: On Fri, 2007-01-05 at 19:15 -0500, Walden Mathews wrote:
: > : I worry about the notion of a resource being too airy-fairy. I take a fairly simple approach in my conversation. If the urls are different we are talking about two different resources. It might happen that they demarcate the same application state, but clients can't know that. I would suggest that even servers don't really know that. A mapping from URL to representation might be here today but gone after the restructure tomorrow. One of the aliases might become a redirection next week. Servers cannot say with certainty over the long haul that multiple urls map to the same application state. In that context I am very wary of language that talks about two resources being the "same", even if they might happen to be equivalent for the foreseeable future.
: > Problem is, without that "sameness", there is no notion of identity at all, and no resource. Everything is itself and perhaps something and everything else from time to time.* Who can live in a universe like that?
: > You can't convict a murderer, because it was never him; it was someone else who happened to "demarcate the same state"... on that particular day.
:
: Well, you can't convict a resource. However you can put a proxy between the resource and those who it might harm. The proxy could match requests destined for http://example.com/thecriminal and treat them specially, for example by rejecting them outright. If correspondence was still possible via http://example.com/thecriminal? you would need to match that too, but you might also need to match http://example.com/thecriminalsbrother who is colluding with the criminal to allow message exchanges to continue. Are the criminal and the criminal's brother the same resource? No, though they are equivalent in this context for the problem the proxy is trying to solve. Are http://example.com/thecriminal and http://example.com/thecriminal? the same resource? I would say no. One day the second resource might be disabled when the bug that caused the urls to be handled equivalently is fixed. Then they will no longer necessarily require identical treatment.
:
: I don't go for the sameness of resources or urls. I think sameness is a concept that only really applies to urns in spaces where a single urn couldn't be specified for technical reasons. Even then, if they are equivalent... what purposes are they equivalent for? Can you guarantee that they will be equivalent for every purpose I can conceive of? Same is too loose a concept for me. I can only really define it one way: is the identification either identical or defined by the scheme to be equivalent?
:
: So in answer to your question, a resource identified by a particular url is the same as itself. That is all we need for the purposes of identity. For other purposes we need more problem-specific concepts of equivalence to make statements that can be understood.
:
: Benjamin.
Roy T. Fielding wrote:
> Canonical URLs means that new links are created to the same URL as previous links, which increases linked-to values, which increases both the perceived node value of a hub and the corresponding values of the nodes that link to that hub.

What are the tangible places where having a single canonical URL provides benefits?

-- Search engine ranking
-- Router and proxy caches
-- Anywhere else?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
On Jan 7, 2007, at 10:00 PM, Mike Schinkel wrote:
> Roy T. Fielding wrote:
>> Canonical URLs means that new links are created to the same URL as previous links, which increases linked-to values, which increases both the perceived node value of a hub and the corresponding values of the nodes that link to that hub.
>
> What are the tangible places where having a single canonical URL provides benefits?
>
> -- Search engine ranking
> -- Router and proxy caches
> -- Anywhere else?

Think differently: the overall value of the graph of resources and links increases as hub value increases. It is not a technical or application question but how useful the overall graph is.

Jan

> --
> -Mike Schinkel
> http://www.mikeschinkel.com/blogs/
> http://www.welldesignedurls.org/
Jan Algermissen wrote:
> > What are the tangible places where having a single canonical URL provides benefits?
> >
> > -- Search engine ranking
> > -- Router and proxy caches
> > -- Anywhere else?
>
> Think differently: the overall value of the graph of resources and links increases as hub value increases. It is not a technical or application question but how useful the overall graph is.

Well, that answer effectively short-circuits what I was trying to analyze. :-( The research I am doing is telling me there are benefits to having multiple URLs, so the requirement to force everything to a canonical URL limits the ability to pursue those other benefits. I asked the question to determine the affected scope in hopes of identifying alternate methods to address the real concerns. But if I am only allowed to view it as an abstraction, I won't be able to consider tangible alternate solutions.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/

P.S. Interestingly, a lot of web principles are based on ideals yet those ideals are often violated for many different practical reasons. Would be nice if we could find ways that those ideals would not need to be violated....
On Sun, 2007-01-07 at 10:44 -0500, Walden Mathews wrote: > : So in answer to your question, a resource identified by a particular url > : is the same as itself. That is all we need for the purposes of identity. > : For other purposes we need more problem-specific concepts of equivalence > : to make statements that can be understood. > In systems design, it is important to collapse concepts which > within the system have no real distinction. We refactor code, > establish alias lookup schemes, and probably do a whole > raft of other things to factor out redundancy noise in the interest > of simplicity and clarity. > > Obsolete stuff should get garbage collected. In the system > you propose, that can't happen. You are making identifiers too > important, IMO. More important than what they identify. Would you mind elaborating? I'm not sure I understand what you are intending to say. What I am suggesting is that different urls are always different identifiers, even when they identify loosely- or closely-related resources. I see this suggestion as a practical breakdown of the system. No node in the system can sustainably claim equivalence of two urls except as defined by scheme without adding "for my purposes". The server can talk about the sameness of two urls for the purpose of causing them to use the same processing for requests. That is ephemeral. The authentication proxy can talk about sameness of two urls for the purposes of access control. That too is ephemeral and is a distinct concept. A client can talk about the sameness of urls for the purposes of its usage. Sameness is a short-lived, perspective-specific notion. Even the notion of an identified resource being the same over time depends on the context in which the word "same" is used. I think you are suggesting that an eternity of association between url and resource is too much. I'm not sure things are so simple. URLs can be deprecated and fade into obscurity, then be brought back to life in a different form.
Perhaps the resource at the url is no longer the same from particular perspectives or even from any perspective, but the value in calling it a different resource is probably minimal at that point. Certainly, breaking a sameness assumption about a resource anywhere in the system will have negative effects on the nodes that rely on that sameness. It is sometimes hard to judge what those effects will be. It is sometimes hard to gauge what assumptions are being made in the wild. So, "same" for the purpose of identity and "same" for other purposes don't have to be concepts that line up. The identified resource is the same when the same url is used. We know that. Past that point we can't predict what other kinds of sameness will be applicable. We can only rely on social contract with the resource provider that obvious sameness is left intact. Social contracts can last for a long time, but we can't rule out the possibility that one day the http://google.com resources will be sold out to a goggle retailer and change meanings for most useful purposes. So I do suggest that a url identifies a particular resource, and that other urls identify different resources. The mapping of that resource to representations can change over time to break all of our sameness assumptions if it wants to... but I suggest that sameness of identification is the only impartial and absolute way to express the sameness concept. A resource is many things to many people. Is <http://google.com.au/> the same as <http://google.com/>? It is for me, but likely not for you. It is difficult to carry on a conversation about which urls identify the same resource if a resource is simply "what you make of it" from any appropriate perspective. Take the google example again. To me they both express the concept of "google's search page", however to someone from abroad a different set of urls would fall into the set. It is difficult to talk about which urls identify different resources.
I see identity as the rock of vocabulary that allows unambiguous conversations to be had about what resources are and how we should interact with them. You can then move on to definitions of resources that help express the meaning of the classic HTTP methods: * A resource is a selection of application state that can be operated on by verbs such as the HTTP methods. Different resources may select or demarcate the same or overlapping application state. * The selection is expressed as an identifier. The server determines what mapping should apply from this identifier to its objects, tables, or other accessible data or state at all times. This mapping is subject to change according to the server's requirements, which will partially reflect the requirements of its clients. * GET requests a representation of the selected application state in one of possibly several representations. Different representations will likely retain different levels of semantic fidelity of the actual application state. * PUT replaces the selected application state with the state transferred in the request's representation. * POST adds the state transferred in the request's representation to the server's state. * DELETE is a PUT of the null state to the resource. * The server may handle the requests in a number of ways that don't exactly match the request. Business logic can come into effect to have arbitrary knock-on consequences, including those that leave the identified resource in a different state to that intended by the method. Some consequences can be communicated back to the client as part of the response, for example a "created" or "no content" response can be returned to a POST request. This set of definitions defines the resource concept in a different way to REST, and I think forms a subset of theoretical REST that can form the basis of discussion about which methods a particular architecture should have.
It lies somewhere between the real web and REST theory as a practical bridge of good web style. Benjamin.
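Benjamin's method definitions above lend themselves to a small executable sketch. The following Python is my illustration only, not code from the thread: a toy in-memory "server" whose application state is a plain dict, showing PUT and DELETE as idempotent state replacement (with DELETE modeled as a PUT of the null state, exactly as defined above) and POST as non-idempotent state addition. All names here are hypothetical.

```python
# A minimal sketch (not Benjamin's code) of the method semantics defined
# above, using a plain dict as the server's application state.

class ResourceStore:
    def __init__(self):
        self.state = {}  # identifier -> selected application state

    def get(self, uri):
        # GET: a representation of the selected state (here, the state itself)
        return self.state.get(uri)

    def put(self, uri, representation):
        # PUT: replace the selected state with the transferred representation.
        # Repeating the same PUT leaves the same state (idempotent).
        if representation is None:
            self.state.pop(uri, None)  # DELETE modeled as a PUT of null state
        else:
            self.state[uri] = representation

    def delete(self, uri):
        # DELETE: per the definition above, simply a PUT of the null state
        self.put(uri, None)

    def post(self, uri, representation):
        # POST: add the transferred state to the server's state.
        # Repeating a POST adds again (not idempotent).
        self.state.setdefault(uri, []).append(representation)
```

Repeating the same PUT or DELETE leaves the state unchanged, while repeating a POST keeps adding; this is the property that makes If-Match-style concurrency control and safe retries workable for PUT and DELETE but not for POST.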
Benjamin Carlyle wrote: > I see this suggestion as a practical breakdown of the system. No node in > the system can sustainably claim equivalence of two urls except as > defined by scheme without adding "for my purposes". Sure it can. Just like I can say that the identifiers "Norma Jean Mortenson" and "Marilyn Monroe" identify the same person. Of course I can also say that "Marilyn Monroe" and "Marilyn Manson" identify the same person. The fact that a claim that two URIs identify the same resource can be wrong is a different matter to whether they actually can do so.
Benjamin, This is getting way too complicated. Let's clear away some of the distractions and deal with a very simple case: Let's say you have a static page you intend to host forever, and you have one and only one representation you send for that page. You don't honor POST, PUT or DELETE (or any other unsafe method that may appear someday). But, for reasons we don't care about right now, you support two URLs for that page: http://benjamin.com/thepage and http://benjamin.com/page1 Clearly you have two identifiers here. But are you willing to allow that they simply identify the same resource, and so there are not two resources, just one? If not, why? Walden
Hi Mike, On Jan 7, 2007, at 1:47 PM, Mike Schinkel wrote: > The research I am doing is telling me there are benefits to have > multiple > URLs, so the requirement to force everything to a canonical URL > limits the > ability to pursue those other benefits. I asked the question to > determine > the affected scope in hopes of identifying alternate methods to > address the > real concerns. But if I am only allowed to view as an abstraction, > I won't > be able to consider tangible alternate solutions. > Could you elaborate on those "benefits to have multiple URL's"? I'm curious what they are. Thanks. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Bill Venners wrote:
> > The research I am doing is telling me there are benefits to have
> > multiple URLs, so the requirement to force everything to a
> canonical
> > URL limits the ability to pursue those other benefits. I asked the
> > question to determine the affected scope in hopes of identifying
> > alternate methods to address the real concerns. But if I am only
> > allowed to view as an abstraction, I won't be able to consider
> > tangible alternate solutions.
> >
> Could you elaborate on those "benefits to have multiple
> URL's"? I'm curious what they are.
I can give you one easy example, but the other I'm still researching and it
will require more background. I plan to cover them on the WDUI Blog[1] when
the research is complete and I have published enough prerequisite
material.
But in a nutshell, for the one example the benefit is a URL structure that
models the website's structure, contributing to a more usable website and a
more directly usable URL structure. Hierarchies are wonderful structuring
mechanisms for helping humans conceptualize and navigate complex and/or
large amounts of information. However, there are few true hierarchies; most
are just presented as hierarchies to help humans conceptualize. Hierarchies
can provide websites with a useful organizational structure along with
easy-to-navigate URLs, so users can see the URLs as they navigate, and hack
to go up and guess to go down or across. Multiple URLs are needed to reach
the same location, with the only difference at the root node being the
breadcrumbs.
I could easily give hundreds of examples (and I do mean hundreds; I expect
the range of potential examples is possibly in 5 digits if not 6) of how
this could be used. For one example, let's look at a list of representatives
for the United States Congress. I'm using URI Templates[1] to describe the
potential URLs. Each of the following points to a representative, but the
drill-down path is different:
http://www.congress.info/house/{state}/{district}/{congress}/
http://www.congress.info/house/{state}/{congress}/{district}/
http://www.congress.info/house/{congress}/{state}/{district}/
http://www.congress.info/{state}/{congress}/house/{district}/
http://www.congress.info/{state}/house/{congress}/{district}/
http://www.congress.info/{state}/house/{district}/{congress}/
http://www.congress.info/{congress}/{state}/house/{district}/
http://www.congress.info/{congress}/house/{state}/{district}/
http://www.congress.info/{congress}/{state}/{district}/
http://www.congress.info/house/{representative}
http://www.congress.info/house/{congress}/{representative}
http://www.congress.info/house/{congress}/{state}/{representative}
http://www.congress.info/house/{state}/{representative}
http://www.congress.info/house/{state}/{district}/{representative}
http://www.congress.info/{state}/{representative}
http://www.congress.info/{state}/{congress}/{representative}
http://www.congress.info/{state}/{district}/{representative}
http://www.congress.info/{state}/house/{congress}/{representative}
http://www.congress.info/{state}/house/{district}/{representative}
http://www.congress.info/{state}/{congress}/house/{representative}
http://www.congress.info/{state}/{district}/house/{representative}
And each of these URLs takes the user to the same resource given the same
values for the URI Template variables:
http://www.congress.info/senate/{state}/{class}/{congress}/
http://www.congress.info/senate/{state}/{congress}/{class}/
http://www.congress.info/senate/{congress}/{state}/{class}/
http://www.congress.info/{state}/{congress}/senate/{class}/
http://www.congress.info/{state}/senate/{congress}/{class}/
http://www.congress.info/{state}/senate/{class}/{congress}/
http://www.congress.info/{congress}/{state}/senate/{class}/
http://www.congress.info/{congress}/senate/{state}/{class}/
http://www.congress.info/{congress}/{state}/{class}/
http://www.congress.info/senate/{representative}
http://www.congress.info/senate/{congress}/{representative}
http://www.congress.info/senate/{congress}/{state}/{representative}
http://www.congress.info/senate/{state}/{representative}
http://www.congress.info/senate/{state}/{class}/{representative}
http://www.congress.info/{state}/{representative}
http://www.congress.info/{state}/{congress}/{representative}
http://www.congress.info/{state}/{class}/{representative}
http://www.congress.info/{state}/senate/{congress}/{representative}
http://www.congress.info/{state}/senate/{class}/{representative}
http://www.congress.info/{state}/{congress}/senate/{representative}
http://www.congress.info/{state}/{class}/senate/{representative}
The variables:
{congress} - the congress number; every two years there's a new one.
Currently it is 110.
{state} - two character state code.
{district} - the number with a prefix (1st, 2nd, 3rd, etc.) identifying the
district for that state (you'll note this is always subordinate to {state})
{class} - the class (I, II, or III) for the senator. Every two years there
is a new class and then it repeats at year six. A state has two classes of
senators at any given time. (you'll note this is always subordinate to
{state})
{representative} - The canonicalized version of the representative's name.
Of course these might need a strategy for disambiguation but I ignored that
as it is an irrelevant detail for our discussion.
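Since the lists above are written as URI Templates, here is a minimal sketch of how they expand. The expand() helper is a simplification I wrote for illustration (the real draft-gregorio-uritemplate processing rules are richer); the variable values are taken from the concrete John Lewis example that follows.

```python
# A toy URI Template expander (illustrative only; not the full draft spec):
# each {variable} in the template is replaced by its value from a mapping.

import re

def expand(template, values):
    # Substitute every {name} occurrence with values[name].
    return re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], template)

values = {"state": "ga", "district": "5th", "congress": "110",
          "representative": "john-lewis"}

templates = [
    "http://www.congress.info/house/{state}/{district}/{congress}/",
    "http://www.congress.info/{congress}/house/{state}/{district}/",
    "http://www.congress.info/house/{state}/{district}/{representative}",
]
for t in templates:
    print(expand(t, values))
```

Each template yields a different drill-down path, but all of them resolve with the same variable values, which is the point of the example: many URLs, one underlying resource.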
The above may look overwhelming, but realize that the user would only ever
see one page at a time, and would rarely ever travel down an alternate path,
only the path that made sense to them while they were looking for the
information; once they found it, why would they continue looking? But this
would give them the ability to drill down in another order that made
sense to them.
And just so you can see it in concrete form, here are all the
potential drill-down URLs for my congressman:
http://www.congress.info/house/ga/5th/110/
http://www.congress.info/house/ga/110/5th/
http://www.congress.info/house/110/ga/5th/
http://www.congress.info/ga/110/house/5th/
http://www.congress.info/ga/house/110/5th/
http://www.congress.info/ga/house/5th/110/
http://www.congress.info/110/ga/house/5th/
http://www.congress.info/110/house/ga/5th/
http://www.congress.info/110/ga/5th/
http://www.congress.info/house/john-lewis/
http://www.congress.info/house/110/john-lewis/
http://www.congress.info/house/110/ga/john-lewis/
http://www.congress.info/house/ga/john-lewis/
http://www.congress.info/house/ga/5th/john-lewis/
http://www.congress.info/ga/john-lewis/
http://www.congress.info/ga/110/john-lewis/
http://www.congress.info/ga/5th/john-lewis/
http://www.congress.info/ga/house/110/john-lewis/
http://www.congress.info/ga/house/5th/john-lewis/
http://www.congress.info/ga/110/house/john-lewis/
http://www.congress.info/ga/5th/house/john-lewis/
If, as a typical web user, I was drilling down any of those URLs to find
John Lewis' page and was redirected to the "canonical" URL, it would be
totally disorienting and bad UI, IMO. And which one would be canonical
anyway?
Clearly the "only one canonical URL" guidance isn't followed 100%, and I
believe that is because the guidance isn't flexible enough to allow
alternately beneficial use cases. So I believe we should be able to question
the guidance, because the guidance really is an implementation detail.
Instead we should be able to look more abstractly at the problem and see if
we can achieve the outcomes the guidance seeks to achieve using alternate
means. Note I didn't say "ignore" the guidance; I said "question it" in
hopes of finding a better solution for future guidance.
I have several different ways to potentially address these concerns, but I
haven't researched them enough, so I'll let those wait. Besides, this email
is far too long already. ;-)
======
So... now that I've (at least attempted to) answer your question, can you
help me with mine? What are some of the benefits of Canonical URLs besides
caching and search engine optimization? And you could even elaborate on the
benefits to caching and search engine optimization too. Thanks in advance.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
[1] http://www.ietf.org/internet-drafts/draft-gregorio-uritemplate-00.txt
On 1/8/07, Mike Schinkel <mikeschinkel@...> wrote: > And you could even elaborate on the benefits to caching and search engine optimization too. Thanks in advance. Nice articles by Mark Nottingham: http://www.mnot.net/cache_docs/ http://www.mnot.net/blog/2005/11/26/caching GWA is a popularization of caches: http://google.blognewschannel.com/index.php/archives/2005/05/04/google-web-accelerator/ Squid's a free proxy/cache, so if you google it you might find some useful remarks on caching there. I guess Akamai's still going... I'm a little out of touch. Hugh
Mike, : But in a nutshell, for the one example the benefits are URL structure that : models the website's structure and contributes to a more usable website and : more directly usable URL structure. It's not clear who reaps these "benefits". But I think you are trying to design a web in which clients learn your rules of hierarchy and then navigate your site by synthesizing URL's by following the rules. Meanwhile, URIs are supposed to be opaque from the client angle, and it should be hypertext that is guiding them along the path from one resource to the next. Have you read up much on these concepts? Walden
Hi Walden, On Jan 8, 2007, at 4:14 PM, Walden Mathews wrote: > Mike, > > : But in a nutshell, for the one example the benefits are URL > structure that > : models the website's structure and contributes to a more usable > website > and > : more directly usable URL structure. > > It's not clear who reaps these "benefits". But I think you are > trying to design a web in which clients learn your rules of hierarchy > and then navigate your site by synthesizing URL's by following > the rules. > > Meanwhile, URIs are supposed to be opaque from the client > angle, and it should be hypertext that is guiding them along the > path from one resource to the next. Have you read up much on > these concepts? > URIs are supposed to be opaque from the client program perspective, but may not be opaque from the client user perspective. People may use the URI to glean information about the information architecture of the site. Jakob Nielsen talks about this topic here: http://www.useit.com/alertbox/990321.html In observing my own behavior while using other people's web sites, I noticed I occasionally find myself hacking off pieces of a URI in the hopes of finding something conceptually higher up in the hierarchy. It usually didn't work, but I tried. I did not add things to URIs, but I did try to subtract things. I want URI hacking to always work at the websites whose URIs and information architecture I design. Where I depart from Mike's approach is that I would still pick one hierarchy from his many possibilities, and have one canonical URI for each resource. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Hi Mike, On Jan 8, 2007, at 11:22 AM, Mike Schinkel wrote: > Bill Venners wrote: >>> The research I am doing is telling me there are benefits to have >>> multiple URLs, so the requirement to force everything to a >> canonical >>> URL limits the ability to pursue those other benefits. I asked the >>> question to determine the affected scope in hopes of identifying >>> alternate methods to address the real concerns. But if I am only >>> allowed to view as an abstraction, I won't be able to consider >>> tangible alternate solutions. >>> >> Could you elaborate on those "benefits to have multiple >> URL's"? I'm curious what they are. > > I can give you one easy example, but the other I'm still > researching and > will require more background. I plan to cover them on the WDUI Blog > [1] when > I have the research complete and I have published enough prerequisite > material. > > But in a nutshell, for the one example the benefits are URL > structure that > models the website's structure and contributes to a more usable > website and > more directly usable URL structure. Hierarchies are wonderful > structuring > mechanisms for helping humans conceptualize and navigate complex > and/or > large amounts of information. However, there are few true > hierarchies; most > are just presented as hierarchies to help humans conceptualize. > Hierarchies > can provide websites with a useful organizational structure along with > easy-to-navigate URLs so users can see the URLs as they navigate, > and hack > to go up and guess to go down or across. Multiple URLs are needed > to reach > the same location with the only difference at the root node being the > breadcrumbs. > I think I understand. Design is about making tradeoffs, and there are several of them here. First of all, as I think you've pointed out, you could have all of your various hierarchies to navigate down, but at the end when you get to the endpoint, you could redirect to a canonical form.
If you don't have canonical URIs, then you need not jolt the user by that redirect at the end, which may be a usability plus, but you may make it harder to find the page via search engines, which is a usability minus (and probably a business minus for your website). It is also a caching minus, because the cache won't know it already has something that came via a different URI. That can be a usability minus in the form of slower perceived response time, and a business minus in terms of higher bandwidth costs. As a result, I'd probably lean towards having canonical URIs. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
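A server could implement the "redirect to a canonical form at the endpoint" idea Bill describes like this. This is my sketch, not Bill's or Mike's design; the paths reuse the congress.info example and the alias mapping is hypothetical.

```python
# A minimal sketch of answering drill-down alias paths with a permanent
# redirect to one canonical URL (illustrative; paths are hypothetical).

CANONICAL = "/house/ga/5th/john-lewis/"
ALIASES = {
    "/ga/house/5th/john-lewis/",
    "/110/house/ga/5th/john-lewis/",
}

def respond(path):
    """Return a (status, headers) pair for a request path."""
    if path == CANONICAL:
        return 200, {}
    if path in ALIASES:
        # 301 so clients, caches, and search engines learn the canonical URL
        return 301, {"Location": CANONICAL}
    return 404, {}
```

This keeps the many navigable hierarchies while giving caches and search engines a single URL to converge on, at the cost of the end-of-path "jolt" Bill mentions.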
Bill, : URIs are supposed to be opaque from the client program perspective, : but may not be opaque from the client user perspective. People may : use the URI to glean information about the information architecture : of the site. Jakob Nielsen talks about this topic here: : : http://www.useit.com/alertbox/990321.html Webarch [1] is pretty clear on the topic, though. Agents should not infer properties from the URI, and "agents" means both people and machines. [1] http://www.w3.org/TR/webarch/#uri-opacity : : In observing my own behavior while using other people's web sites, I : noticed I occasionally find myself hacking off pieces of a URI in the : hopes of finding something conceptually higher up in the hierarchy. : It usually didn't work, but I tried. I did not add things to URIs, : but I did try to subtract things. I want URI hacking that to always : work at the websites whose URIs and information architecture I : design. Where I depart from Mike's approach is that I would still : pick one hierarchy from his many possibilities, and have one : canonical URI for each resource. Why design to such a limited audience? Why not improve the representations (i.e., linking) instead? Walden
On Jan 8, 2007, at 3:58 AM, Benjamin Carlyle wrote: > I see identity as the rock of vocabulary that allows unambigous > conversations to be had about what resources are and how we should > interact with them. You can then move on to definitions of resources > that help express the meaning of the classic HTTP methods: ... > This set of definitions defines the resource concept in a different > way > to REST, and I think forms a subset of theoretical REST that can form > the basis of discussion about which methods a particular architecture > should have. It lies somewhere between the real web and REST theory > as a > practical bridge of good web style. Perhaps you should consider what the purpose of these definitions may be, and what it is you are trying to describe. REST is a model of an idealized Web application that attempts to maximize a particular set of properties that I consider to be the "most important properties" of the Web as a whole. Its purpose is to guide architecture decisions in light of that model, so that I can avoid breaking the important bits while developing the architecture and so that developers can understand the important bits when developing applications. The problem with your model is that it doesn't respect reality. By axiomatizing away the notion of resource equivalence you simplify the model, but then your model is incapable of explaining the information theoretic properties of the Web that I just finished describing. In your model, URI aliases are not an issue because they don't exist. In the REST model, URI aliases are an issue because they reduce the perceived importance of a given resource and reduce the efficiency of caching resource representations. REST teaches the architect that reducing aliases will result in a more efficient system, whereas your model just assumes such a reduction is impossible. 
Google search ranking and duplicate result presentation emphasize the same characteristics as the REST model because those are characteristics of the information space as we know it (the Web). And you say that your model is somewhere between the "real Web" and REST theory? Wrong. The Web is a lot more than the sum of client-server interaction. So, the question then isn't how you might define resources. The question is what do you intend to accomplish by doing so? There are many ways to look at any given system, particularly when focusing on only one of the components. An abstraction like REST is supposed to help the designer identify mismatches in the architecture. Perhaps you don't see the problem because you aren't applying your model to the Web as a whole, but rather something more limited (such as a server-side development framework)? ....Roy
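The caching cost of aliases that Roy describes can be made concrete with a toy URI-keyed cache (my sketch, reusing Walden's hypothetical /thepage and /page1 URLs from earlier in the thread): two aliases of one resource mean two origin fetches and two stored copies, where a single canonical URI would need only one.

```python
# A toy shared cache keyed by URI (illustrative only). Aliases defeat it:
# the cache cannot know two URIs denote the same resource, so each alias
# costs a separate origin fetch and a separate stored entry.

class Cache:
    def __init__(self):
        self.entries = {}
        self.origin_fetches = 0

    def fetch(self, uri):
        if uri not in self.entries:
            self.origin_fetches += 1  # cache miss: go to the origin server
            self.entries[uri] = "representation of the one resource"
        return self.entries[uri]

cache = Cache()
cache.fetch("http://benjamin.com/thepage")
cache.fetch("http://benjamin.com/page1")  # alias: same resource, fresh miss
```

This is the sense in which REST teaches that reducing aliases yields a more efficient system: the inefficiency exists whether or not a particular model of "resource" chooses to talk about it.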
Hi Walden, On Jan 8, 2007, at 5:37 PM, Walden Mathews wrote: > Bill, > > : URIs are supposed to be opaque from the client program perspective, > : but may not be opaque from the client user perspective. People may > : use the URI to glean information about the information architecture > : of the site. Jakob Nielsen talks about this topic here: > : > : http://www.useit.com/alertbox/990321.html > > Webarch [1] is pretty clear on the topic, though. Agents should > not infer properties from the URI, and "agents" means both people > and machines. > Hmm. You're right that that document defines agents as both software and people, but I suspect most web users haven't read that document and may not know to not infer anything from URIs. My belief is that people do use the URI for clues as to information architecture, and therefore I think it is appropriate to design URIs with that in mind. > [1] http://www.w3.org/TR/webarch/#uri-opacity > > : > : In observing my own behavior while using other people's web sites, I > : noticed I occasionally find myself hacking off pieces of a URI in > the > : hopes of finding something conceptually higher up in the hierarchy. > : It usually didn't work, but I tried. I did not add things to URIs, > : but I did try to subtract things. I want URI hacking that to always > : work at the websites whose URIs and information architecture I > : design. Where I depart from Mike's approach is that I would still > : pick one hierarchy from his many possibilities, and have one > : canonical URI for each resource. > > Why design to such a limited audience? Why not improve the > representations (i.e., linking) instead? > Mike can do that too, but all those things need to be at some URI each, so why not select a canonical URI for each resource that may help users figure out where they are in his information architecture. Users aren't depending on a URI meaning what they guess it means. 
They just try it and see if the representation matched their expectations. If it doesn't, they will quickly try something else. It is the representation that they ultimately depend on. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
On Jan 8, 2007, at 2:18 PM, Hugh Winkler wrote: > On 1/8/07, Mike Schinkel <mikeschinkel@...> wrote: > >> And you could even elaborate on the > benefits to caching and search engine optimization too. Thanks in > advance. > > Nice articles by Mark Nottingham: > > http://www.mnot.net/cache_docs/ > This doc says: must-revalidate tells caches that they must obey any freshness information you give them about a representation. HTTP allows caches to serve stale representations under special conditions; by specifying this header, you're telling the cache that you want it to strictly follow your rules. Could someone enlighten me, or point me to such enlightenment, what the special conditions are that a cache could serve stale data, and why I'd want to say must-revalidate to prevent this? Also, when might I want to say proxy-revalidate instead? Thanks. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
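For what it's worth, the "special conditions" being asked about are spelled out in RFC 2616 (section 14.9.4): a cache MAY serve a stale response, attaching a Warning header, in cases such as being unable to reach the origin to revalidate; must-revalidate forbids exactly that, and proxy-revalidate imposes the same rule on shared caches only. A toy decision function (my simplification, not real cache code):

```python
# A toy model, per RFC 2616 sec. 14.9.4, of when a cache may answer from
# storage without revalidating (illustrative simplification only).

def can_serve_from_cache(age, max_age, must_revalidate, origin_reachable):
    """Return True if the cached entry may be served without revalidation."""
    if age <= max_age:
        return True   # still fresh: always servable
    if must_revalidate:
        return False  # stale + must-revalidate: must contact the origin
    # Stale without must-revalidate: a cache MAY serve it anyway in special
    # conditions, e.g. when disconnected from the origin, attaching a
    # "Warning: 110" header (warning handling omitted here).
    return not origin_reachable
```

So must-revalidate matters when serving stale data would be worse than failing outright, and proxy-revalidate lets a private cache (one user's browser) keep the laxer behavior while shared proxies must revalidate.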
On 1/8/07, Walden Mathews <waldenm@...> wrote: > Bill, > > : URIs are supposed to be opaque from the client program perspective, > : but may not be opaque from the client user perspective. People may > : use the URI to glean information about the information architecture > : of the site. Jakob Nielsen talks about this topic here: > : > : http://www.useit.com/alertbox/990321.html > > Webarch [1] is pretty clear on the topic, though. Agents should > not infer properties from the URI, and "agents" means both people > and machines. > > [1] http://www.w3.org/TR/webarch/#uri-opacity And if I understand Roy[1] correctly, the only constraint is that agents should not INFER properties from the URI. As long as things are explicitly defined (by spec, by the server, or otherwise), agents can and do make use of information embedded in the URI. [1]http://tech.groups.yahoo.com/group/rest-discuss/message/5369 --Chuck
Hugh Winkler wrote: > Nice articles by Mark Nottingham: > > http://www.mnot.net/cache_docs/ Damn that's a lot to read!!! But seriously, thanks. :-) > Squid's a free proxy/cache, Interesting. I'm definitely going to check that out. Thanks. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
Bill Venners wrote: > First of all, as I think > you've pointed out you could have all of your various > hierarchies to navigate down, but at the end when you get to > the endpoint, you could redirect to a canonical form. If you > don't have canonical URIs, then you need not jolt the user by > that redirect at the end, which may be a usability plus, but > you may make it harder to find the page via search engines, > which is a usability minus (and probably a business minus for > your website). Well, I want to explore why it is harder for search engines to find that page. And can we change that? Or consider the problem from a different perspective? Google tweaks their algorithms quarterly. Why could they not include an update given new guidance? The rest would follow. > It is also a caching minus, because the cache > won't know it already has something that came via a different > URI. That can be a usability minus in the form of slower > perceived response time, and a business minus in terms of > higher bandwidth costs. Again, I'd like to explore why that is and see if there are not new ways to look at the problem. > As a result, I'd probably lean > towards having canonical URIs. As you state, it is a tradeoff. But in my research I'm seeing too many benefits to ignore, so I'm motivated to come up with an alternate approach with fewer tradeoffs. BTW, it's hard to convey the full depth and breadth of what I've learned because I haven't finished my research. I know intuitively but am not ready to make the full case on paper. I think it was my mistake to give you the courtesy of an explanation before I was prepared, because you've now gone away unconvinced and (by human nature) are less likely to reconsider the issue when I publish. FWIW. (But at least you were respectful about it. :) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
Walden Mathews wrote: > Mike, > > : But in a nutshell, for the one example the benefits are URL > structure that > : models the website's structure and contributes to a more > usable website and > : more directly usable URL structure. > > It's not clear who reaps these "benefits". But I think you > are trying to design a web in which clients learn your rules > of hierarchy and then navigate your site by synthesizing > URL's by following the rules. The benefits accrue to web users but more so to website owners. For web users it increases the intuitiveness of a website and, if they are bloggers or cite links in some other activity, it makes it easier for them to do so. For one example, look at Wikipedia's URLs; is it any wonder they are linked as often as they are? http://en.wikipedia.org/wiki/REST If it were instead http://www.wikipedia.org/topic.php?topicID=7937521&lang=en-US&source-ie7&en- US&ie=utf8&oe=utf8 I can guarantee you it would not be linked as much. I could remember and compose the former; there is no way I could remember the latter. (FYI, I frequently link to Wikipedia just by prefixing a term with "www.wikipedia.org/wiki/") And when people link to a site, the benefits accrue to the site. If I'm at a party and someone asks me about some pictures I took, I can just tell them to go to: http://www.flickr.com/photos/mikeschinkel My earlier albums on Snapfish.com, which are only visible if I send an invite, have a URL like this to access (WTF?!?): http://www1.snapfish.co.uk/share/p=12481168315141016/l=238932350/g=25011088/ cobrandOid=1007/otsc=SYE/otsi=SALB I share an appreciation for motorcycles with my dad. I like to send him links in email. Let's assume the link breaks. Which one is he likely to be able to fix? (and which one is least likely to break?)
http://www.suzukicycles.com/Products/DRZ400SMK7/
http://www.kawasaki.com/Products/Detail.aspx?id=201
http://www.yamaha-motor.com/sport/products/modelhome/180/0/home.aspx
http://powersports.honda.com/motorcycles/sport/model.asp?ModelName=RC51&ModelYear=2006&ModelId=RVT1000R6

But if they were this instead, it would be much easier for Dad to
identify and fix those broken links (and much less likely they'd break):

http://www.suzukicycles.com/bikes/drz400sm/2007/
http://www.kawasaki.com/bikes/ninja-650r/2007/
http://www.yamaha-motor.com/bikes/fjr12300a/2007/
http://powersports.honda.com/bikes/rc51/2006/

Now, let's say that I wanted to send him a link to look at the 50th
Anniversary Sportster. See the link below? Tell me what I should send
him. Go ahead. Open it up. And tell me what link.

http://www.harley-davidson.com/wcm/Content/Pages/2007_Motorcycles/2007_Motorcycles.jsp?locale=en_US&bmLocale=en_US

Now what if I wrote a blog article about that same motorcycle; what
would I link to? (Well, it's okay, I'd never send Dad a link to a Harley
or blog about one either. ;-)

So you see, there are so many small reasons why well-designed URLs are a
benefit. But if you don't drill down to see those ways, it doesn't seem
like much at 50,000 feet.

> Meanwhile, URIs are supposed to be opaque from the client angle, and it
> should be hypertext that is guiding them along the path from one
> resource to the next. Have you read up much on these concepts?

Funny you should mention the URI Opacity Axiom[1]. Over the past 3-4
months, I've probably read at least the first 100 references returned by
googling "uri opacity." At this point, I'm going to go out on a limb and
say I've got a really good grasp of that ole' URI opacity axiom. What's
more, I've very closely followed the metaDataInURIs[2] finding by the
TAG (have you read that one?)
What all this research has told me is that the vast majority of people
misinterpret the URI Opacity axiom, attempt to apply it far more broadly
than it was ever intended to be applied, and do so with such a zeal that
I wonder why this false doctrine has so excited their imaginations. And
I don't say it's been misinterpreted because "I interpreted it correctly
when others failed." No, I'm not that arrogant. I say it because I've
seen specific and explicit guidance given by TimBL and RoyTF on mailing
list discussions I've found during my research clarifying the issue. In
a nutshell, it goes like this:

1.) A URI should be considered opaque by a client if the URI authority
(server) has given it no explicit guidance.

2.) The URI authority (server) need not treat its URLs as opaque;
indeed, a server SHOULD organize its URLs logically and understandably
to humans.

3.) In the case of a URL, the only part that needs to be considered
opaque is the path, because the rest is specified (okay, an uninformed
client shan't look at domain names, like looking at "en.wikipedia.org",
guessing it is English, and hence that de.wikipedia.org would be
German.)

4.) HOWEVER, a URI authority (server) is free to provide explicit
guidance to a client, at which point the client is free to treat the
URIs as transparent.

5.) Further, the URI Opacity concern is related to *machines*, not
humans, because humans have an error-correcting mechanism called
intelligence. If they go to the "/cart/" URL and it displays information
about wheelbarrows instead of a shopping cart, the human can figure it
out and continue looking. The machine is not yet capable of that unless
programmed in advance for the problem.

So, nothing in my prior examples violates the URI Opacity Axiom[1] in
any way. My examples actually follow the TAG finding[2] section "2.5:
URIs that are convenient for people to use", empower "2.2: Guessing
information from a URI", and leverage "2.4: Authority use of URI
metadata."
Ironically, cleaner and more obvious URLs are more likely to be
transcribed consistently than URLs with lots of query parameters that
are 2 to 3 times as long and almost impossible for non-programmers to
read. In this way these clean URLs improve cache efficiency.

Because URI Opacity is so misinterpreted, and for some reason has been
misrepresented with such passion, I plan to write a much longer blog
article on the subject once I'm able to gather all my research (probably
2-3 months from now.)

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/

[1] http://www.w3.org/DesignIssues/Axioms.html#opaque
[2] http://www.w3.org/2001/tag/doc/metaDataInURI-31
Walden Mathews wrote:
> : But in a nutshell, for the one example the benefits are URL structure
> : that models the website's structure and contributing to a more usable
> : website and more directly usable URL structure.
>
> It's not clear who reaps these "benefits". But I think you are trying
> to design a web in which clients learn your rules of hierarchy and then
> navigate your site by synthesizing URLs by following the rules.
>
> Meanwhile, URIs are supposed to be opaque from the client angle, and it
> should be hypertext that is guiding them along the path from one
> resource to the next. Have you read up much on these concepts?

Also, I just ran across this:

http://www.w3.org/QA/2004/08/readable-uri

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
Chuck Hinson wrote:
> And if I understand Roy[1] correctly, the only constraint is that
> agents should not INFER properties from the URI. As long as things are
> explicitly defined (by spec, by the server, or otherwise), agents can
> and do make use of information embedded in the URI.

That's problematic if the place it is explicitly defined is somewhere
other than the hypermedia the client has received.

That said, I see nothing wrong in making guesswork on the part of a user
more likely to work as they expect. Guesswork is a way that humans will
always try to explore something, and it's good UI to react to such
guesses in a way they would expect. It's a foolish user, though, that
can't tell the difference between a guess they tried and "the way things
are meant to be".
Mike Schinkel wrote:
> Well, I want to explore why it is harder for search engines to find
> that page. And can we change that? Or consider the problem from a
> different perspective?

This is simple. The search engine finds x pages which it deems equally
good as a response to a given query. Which does it link to first?
There's no way of saying.

> Google tweaks their algorithms quarterly. Why could they not include an
> update given new guidance? The rest would follow.

We can already give this guidance to Google. Google reacts appropriately
when it receives a 301 Moved Permanently by treating the target of the
Location header as the URI to use.

> > It is also a caching minus, because the cache won't know it already
> > has something that came via a different URI. That can be a usability
> > minus in the form of slower perceived response time, and a business
> > minus in terms of higher bandwidth costs.
>
> Again, I'd like to explore why that is and see if there are not new
> ways to look at the problem.

The same reason you don't know whether what's behind door number 1 is
the same as what's behind door number 2. Again, put a note behind door
number 2 saying "go to door number 1 and open that instead" and your
problem is solved. As a bonus, we can also control how often we check
door number 2 to see if that note is still there.
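The 301 mechanism described above can be sketched in a few lines. This is a minimal illustration, not any particular server's API; the `CANONICAL` alias table, the `respond` helper, and the URIs in it are hypothetical examples.

```python
# Minimal sketch of canonical redirection via 301 Moved Permanently.
# The alias table and URIs here are hypothetical examples.
CANONICAL = {
    "/wiki/rest": "/wiki/REST",
    "/topic.php?topicID=7937521": "/wiki/REST",
}

def respond(path):
    """Return (status, headers) for a request path.

    Aliases answer 301 with a Location header pointing at the one
    canonical URI, so caches and search engines can merge them.
    """
    if path in CANONICAL:
        return 301, {"Location": CANONICAL[path]}
    return 200, {}
```

A cache that sees the 301 learns that door number 2 is really door number 1, and cache-control metadata on the redirect response itself can say how often to re-check that the note is still there.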
Mike Schinkel wrote:
> Also, I just ran across this:
> http://www.w3.org/QA/2004/08/readable-uri

Being treated as opaque does not mean a URI can't be readable or respond
well to guesswork.
Mike Schinkel wrote:
> For one example, look at Wikipedia's URLs; is it any wonder they are
> linked as often as they are?
>
> http://en.wikipedia.org/wiki/REST
>
> If it were instead
>
> http://www.wikipedia.org/topic.php?topicID=7937521&lang=en-US&source-ie7&en-US&ie=utf8&oe=utf8
>
> I can guarantee you it would not be linked as much.

If one 301'd to the other, then the benefits you suggest are there would
still hold.

> And when people link a site, the benefits accrue to the site.

There are more benefits in people linking to one page on your site than
to 15 equivalent pages.

> So you see, there are so many small reasons why well-designed URLs are
> a benefit. But if you don't drill down to see those ways, it doesn't
> seem like much at 50,000 feet.

The benefits of well-designed URIs are a different matter from the
benefits of canonical URIs. Indeed, one reason for having canonical URIs
is to merge the benefits of a well-designed URI and a URI whose
structure is forced upon you by other criteria or artefacts of
implementation, by having one permanently redirect to the canonical one.

> 5.) Further, the URI Opacity concern is related to *machines*, not
> humans, because humans have an error-correcting mechanism called
> intelligence. If they go to the "/cart/" URL and it displays
> information about wheelbarrows instead of a shopping cart, the human
> can figure it out and continue looking.

No, the opacity concern is related to humans, because if they phone and
complain that they got information about wheelbarrows when they should
have gotten a shopping cart, I'm going to want to hit someone (not
really, I get far worse calls than that, but they do make me want to hit
someone). This is because humans have an error-generating mechanism
called stupidity.

Responding reasonably to guesswork is great. I strive to design URI
schemes as intuitively as I can.
But I will delete any bug report about guessed URIs not working unless
it's very definitely marked as a suggested enhancement.
How should I model methods of resources that not only change their
properties, but also create a process of some sort?

For example, say I have an application to manage the construction of
buildings. Given a building plan, I can instruct it to 'build()' the
building. This will cause the server to interact with other systems to
purchase materials, set up tasks for employees, etc.

Should I use 'POST' with a 'method=build' parameter to the 'plan' URI?

Thanks,
Ittay
Hi Ittay,

the kind of question you ask indicates that you have not yet understood
a fundamental aspect of REST, namely that there is a uniform interface.
You do not get any methods other than the uniform set (the HTTP methods,
in the case of using HTTP as an architecture).

Without any intention to be rude, I suggest you take a while to read
through some of the REST material out on the Web (the REST wiki at
http://rest.blueoxen.net is a good starting point, as is Paul's site at
http://prescod.com/rest ). This will help in future discussions.

> Should I use 'POST' with a 'method=build' parameter to the 'plan' URI?

No, this is about the worst from a REST POV, as it is hidden RPC.

Think along these lines: provide a means for clients to pick a resource
that behaves the way they are interested in (some hypermedia the clients
come across could declare a resource as a BuildingManager) and then use
POST to submit the building plan.

POST /BuildingManager
Content-Type: application/blueprint  <-- this type is fictitious; it
would need to be standardized within the realm of your system

<blueprint>
  <wall location="...."/>
</blueprint>

IOW, in a RESTful system clients late-bind to resources based on
runtime-declared abstract behavior of those resources, and they
communicate via a fixed set of methods.

HTH,

Jan
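Jan's late-binding point can be sketched as follows: the client does not hard-code or synthesize the BuildingManager URI, but discovers it from hypermedia it has already retrieved, then POSTs to whatever it found. The XML vocabulary, the `rel` names, and the URIs below are hypothetical illustrations, not part of any real system.

```python
# Sketch of late binding: find a resource advertised with a given
# behavior in hypermedia, rather than constructing its URI by rule.
import xml.etree.ElementTree as ET

# Hypothetical hypermedia the client has already retrieved.
HYPERMEDIA = """<services>
  <link rel="BuildingManager" href="http://example.com/build"/>
  <link rel="Billing" href="http://example.com/billing"/>
</services>"""

def find_resource(xml_text, rel):
    """Return the href of the first link with the given rel, or None."""
    root = ET.fromstring(xml_text)
    for link in root.iter("link"):
        if link.get("rel") == rel:
            return link.get("href")
    return None

target = find_resource(HYPERMEDIA, "BuildingManager")
# A real client would now POST the application/blueprint entity to
# `target` using one of the uniform methods; no new verb is needed.
```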
thanks for your example and site reference.

i did know that i use only HTTP methods; what i lacked is how to model
my business methods. i am a newbie at this (maybe i should have
indicated that).

may i impose with a followup?

say i have a person entity in my application. this entity has several
business methods (eat, sleep, work). how should i model a rest api for
these? create a PersonManager resource (which then will need to accept
as argument what action to do), or create resources per action
(/person/ittay/eat maybe?).

thanks,
ittay

Jan Algermissen wrote:
> Provide a means for clients to pick a resource that behaves the way
> they are interested in (some hypermedia the clients come across could
> declare a resource as a BuildingManager) and then use POST to submit
> the building plan.
> [...]
> IOW, in a RESTful system clients late-bind to resources based on
> runtime-declared abstract behavior of those resources and they
> communicate via a fixed set of methods.

--
===================================
Ittay Dror,
Chief architect,
R&D, Qlusters Inc.
ittayd@...
+972-3-6081994 Fax: +972-3-6081841

www.openqrm.org - Data Center Provisioning
: Hmm. You're right that that document defines agents as both software
: and people, but I suspect most web users haven't read that document
: and may not know to not infer anything from URIs. My belief is that
: people do use the URI for clues as to information architecture, and
: therefore I think it is appropriate to design URIs with that in mind.
Present the "information architecture" through hypertext instead.
Benefits:
1. You train your user to use the system at full leverage.
2. You retain encapsulation of your implementation, and are
then free to change it without breaking your clients.
: Mike can do that too, but all those things need to be at some URI
: each, so why not select a canonical URI for each resource that may
: help users figure out where they are in his information architecture.
: Users aren't depending on a URI meaning what they guess it means.
: They just try it and see if the representation matched their
: expectations. If it doesn't, they will quickly try something else. It
: is the representation that they ultimately depend on.
What do we mean by "canonical" here? I thought, in this context,
it only meant "authorized", as in "which URL to redirect an alias to".
Walden
: For one example, look at Wikipedia's URLs; is it a wonder there are
: linked as often at they are?
:
: http://en.wikipedia.org/wiki/REST
:
: IF it were instead
:
: http://www.wikipedia.org/topic.php?topicID=7937521&lang=en-US&source-ie7&en-US&ie=utf8&oe=utf8
:
: I can guarantee you it would not be linked as much. I could remember
: and compose the former, there is no way I could remember the latter
: (FYI, I frequently like to Wikipedia just by prefixing a term with
: "www.wikipedia.org/wiki/")

1. I can tell you are in sales.
2. You make assumptions I don't: have you heard of cut and paste?

: If I'm at a party and someone asked me about some pictures I took, I
: can just tell them to go to:
:
: http://www.flickr.com/photos/mikeschinkel

I guess if they can remember that, several hours and several beers
later, then they can probably also remember your name. That's better
than I can do. Typically, I email or IM links to people. As for typing
into the location bar of the browser, I avoid it like the plague.

: I share an appreciation for motorcycles with my dad. I like to send
: him links in email. Let's assume the link breaks. Which one is he
: likely to be able to fix? (and which one is least likely to break?)

Breaks? You mean line breaks? Yes, I agree with *short* URIs in
principle.

: But if they were this instead, it would be much easier for dad to
: identify and fix those broken links (and much less likely they'd
: break):

I feel bad for your dad in this case, and if I were he, I'd stick to
fixing motorcycles, because it would entail much less futile typing.
Honestly, Mike, show your dad how to cut and paste broken (line-wrapped)
URIs back together. As for other breakage, forget it.
: http://www.suzukicycles.com/bikes/drz400sm/2007/
: http://www.kawasaki.com/bikes/ninja-650r/2007/
: http://www.yamaha-motor.com/bikes/fjr12300a/2007/
: http://powersports.honda.com/bikes/rc51/2006/
:
: Now, let's say that I wanted to send him a link to look at the 50th
: Anniversary Sportster. See the link below? Tell me what I should send
: him. Go ahead. Open it up. And tell me what link.

I'm afraid....

Walden
Hmmm. I'm a bit of a newbie too, but let me take a stab at this as an
exercise.

Now let me first say that the way you posed the question is very
"objecty" or "servicey" to begin with, and that makes it a bit hard to
answer. You've basically said "I have a service with methods A, B, and
C; how do I make it resource based?" The problem is that REST has no
concept of class-specific methods, so without knowing what A, B, and C
do, it's hard to answer your question. In short, there is no
straightforward transformation of an abstract service to abstract
resources. (At least not that I know of...) They are two very different
ways of modeling the interface. I can't transform one type of solution
to another without understanding the problem.

But I can infer a bit about the problem from the fact you are using the
concept of a person and things a person can do (eat, sleep, work). I'll
assume here that sleep() and work() change the state of the person (to
sleeping and working respectively). Maybe there is a substate when
working that says what work you are doing. Let's also assume that eat()
adds some food items to a list of things eaten.

So to work (on building a bike shed):

PUT /person/ittay/currentactivity
Content-Type: application/x-activity+xml

<activity>
  <type>work</type>
  <sub-type>building a bike shed</sub-type>
</activity>

To sleep:

PUT /person/ittay/currentactivity
Content-Type: application/x-activity+xml

<activity>
  <type>sleep</type>
</activity>

To eat an apple:

POST /person/ittay/stomach
Content-type: application/x-food+xml

<food>apple</food>

Now these actions could also kick off related business processes. I
think the only method that can't is GET. Note that you could use GET to
retrieve the current activity from /person/ittay/currentactivity or the
list of things eaten from /person/ittay/stomach.

--- In rest-discuss@yahoogroups.com, Ittay Dror <ittayd@...> wrote:
> say i have a person entity in my application. this entity has several
> business methods (eat, sleep, work). how should i model a rest api for
> these? create a PersonManager resource (which then will need to accept
> as argument what action to do), or create resources per action
> (/person/ittay/eat maybe?).
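The modeling above can be sketched as server-side state, to make the contrast with eat()/sleep()/work() methods concrete. This is an in-memory illustration, not a real HTTP server; the `PersonResource` class and the sub-resource paths are hypothetical, mirroring the ones in the post.

```python
# In-memory sketch of the resource modeling above: business "methods"
# become transfers of state to sub-resources of /person/ittay.
class PersonResource:
    def __init__(self):
        self.state = {"currentactivity": {"type": "idle"}, "stomach": []}

    def put(self, path, representation):
        # PUT replaces the state of an existing sub-resource.
        if path not in self.state:
            return 404
        self.state[path] = representation
        return 200

    def post(self, path, representation):
        # POST appends to a collection resource (the list of things eaten).
        if path != "stomach":
            return 404
        self.state["stomach"].append(representation)
        return 201

    def get(self, path):
        # GET is read-only: it just returns the current state.
        return self.state.get(path)

ittay = PersonResource()
ittay.put("currentactivity", {"type": "work", "sub-type": "building a bike shed"})
ittay.post("stomach", "apple")
```

Either the PUT or the POST handler could also kick off related business processes as a side effect, which is the point of the example: the uniform interface stays fixed while the behavior behind it varies.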
Hi-
Been lurking for a while, but this is my first time posting a question
here, so forgive the newbieness of it. :)
I've seen a couple of different ways to link to resources but I
haven't seen a good argument for one way or another. Are any of these
more correct than the other? Or is there some other better way?
<foos uri="http://api.example.com/foo/">
<foo uri="http://api.example.com/foo/1">
<short_description>Foo 1</short_description>
</foo>
<foo uri="http://api.example.com/foo/2">
<short_description>Foo 2</short_description>
</foo>
<foo uri="http://api.example.com/foo/3">
<short_description>Foo 3</short_description>
</foo>
</foos>
<foos href="http://api.example.com/foo/">
<foo href="http://api.example.com/foo/1">
<short_description>Foo 1</short_description>
</foo>
<foo href="http://api.example.com/foo/2">
<short_description>Foo 2</short_description>
</foo>
<foo href="http://api.example.com/foo/3">
<short_description>Foo 3</short_description>
</foo>
</foos>
<foos xlink:type="simple" xlink:href="http://api.example.com/foo/">
<foo xlink:type="simple" xlink:href="http://api.example.com/foo/1">
<short_description>Foo 1</short_description>
</foo>
<foo xlink:type="simple" xlink:href="http://api.example.com/foo/2">
<short_description>Foo 2</short_description>
</foo>
<foo xlink:type="simple" xlink:href="http://api.example.com/foo/3">
<short_description>Foo 3</short_description>
</foo>
</foos>
Cheers,
Michael
Hi Walden,

On Jan 9, 2007, at 7:51 AM, Walden Mathews wrote:
> : Hmm. You're right that that document defines agents as both software
> : and people, but I suspect most web users haven't read that document
> : and may not know to not infer anything from URIs. My belief is that
> : people do use the URI for clues as to information architecture, and
> : therefore I think it is appropriate to design URIs with that in mind.
>
> Present the "information architecture" through hypertext instead.

I agree he should present the information architecture through
hypertext, but it need not be "instead." It can be in addition to
presenting it in the URI. Like it or not, both the hypertext and the URI
are part of the user interface of a web application.

> Benefits:
>
> 1. You train your user to use the system at full leverage.

What do you mean by "full leverage?"

> 2. You retain encapsulation of your implementation, and are then free
> to change it without breaking your clients.

I'm not sure what you mean by this either. I'm recommending URI design
from a user perspective, not from the perspective of what makes it easy
to implement. Thus changing the implementation does not imply changing
the URI.

> : Mike can do that too, but all those things need to be at some URI
> : each, so why not select a canonical URI for each resource that may
> : help users figure out where they are in his information architecture.
> : Users aren't depending on a URI meaning what they guess it means.
> : They just try it and see if the representation matched their
> : expectations. If it doesn't, they will quickly try something else. It
> : is the representation that they ultimately depend on.
>
> What do we mean by "canonical" here? I thought, in this context, it
> only meant "authorized", as in "which URL to redirect an alias to".

Yes, that's what I mean by canonical: a canonical form, to which other
URIs that may mean the same thing redirect.
In my case, the reason I had multiple URIs referring to the same
resource anyway was that by default that's what Java Servlets gave me.

Speaking of servlets, last night I tried canonicalizing away a trailing
question mark, and discovered that there's no way to detect that
situation with the Servlet API. I wanted to transform:

http://www.artima.com/articles?

into:

http://www.artima.com/articles

But unfortunately I can't. Search engines might infer that these are the
same and canonicalize this themselves. By the "rules," of course, they
shouldn't. They should treat these URIs opaquely, and since they are
different, not infer that one means the other. But if in 99.9% of the
cases out there this means the same thing, and therefore this inferring
allows the search engine to give better results to their users, they
just might do it. Search engines must design for reality, and so should
web app designers. In reality, users do look to the URI for hints at the
information architecture.

Bill

----
Bill Venners
President
Artima, Inc.
http://www.artima.com
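The canonicalization Bill describes is straightforward wherever the raw request target is visible (which, as he notes, the Servlet API does not expose). A minimal sketch, assuming a hypothetical `canonicalize` helper operating on the raw request target:

```python
# Sketch of canonicalizing away a bare trailing "?", assuming access to
# the raw request target. The caller would answer 301 with the returned
# URI in the Location header when a redirect is needed.
def canonicalize(request_target):
    """Return the canonical URI if a redirect is needed, else None."""
    if request_target.endswith("?"):
        return request_target[:-1]
    return None
```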
what troubles me is that most examples i find to explain rest use simple
cases, where no logic is actually running, and so it is easy to transfer
services to properties (since the services do very little).

so here is my real world use case: openQRM is a platform for
provisioning and managing services in a DC. a service is an entity named
VirtualEnvironment that encapsulates both a set of policies on how to
provision and manage the real service and a capture of the real
service's state (resources, utilization, error, etc.)

we can do many actions on a service, but the primary ones are to start
and stop it. starting means resources are selected from a pool and
assigned to work for the service, the software is deployed on them and
the service is monitored. stopping is the reverse. obviously openQRM
does all this. the client just requests to 'start' or 'stop'.

now, it is hard for me to see 'start' and 'stop' as values of the
'currentactivity' property. if anything, it looks like a masquerade of
the RPC equivalent of posting to the VirtualEnvironment entity with the
pair 'action=start'.

thank you for your help,
ittay

wahbedahbe wrote:
> Now these actions could also kick off related business processes. I
> think the only method that can't is GET. Note that you could use GET to
> retrieve the current activity from /person/ittay/currentactivity or the
> list of things eaten from /person/ittay/stomach.

--
===================================
Ittay Dror,
Chief architect,
R&D, Qlusters Inc.
ittayd@...
+972-3-6081994 Fax: +972-3-6081841

www.openqrm.org - Data Center Provisioning
On Tue, 2007-01-09 at 16:35 +0000, mmakunas wrote:
> I've seen a couple of different ways to link to resources but I
> haven't seen a good argument for one way or another. Are any of these
> more correct than the other? Or is there some other better way?
So, "@uri vs. @href", and separately, "using xlink".
I don't think it matters.
I'm not sure anyone is using xlink, though you might want to adopt it if
you think its model makes sense for your app, and it won't put off
consumers of your resources. The idea of a processor handling xlink'ed
values without understanding the "containing" document itself doesn't
really seem to work.
W.r.t. @uri vs. @href (vs. @src, vs. @rdf:about, ...), I'd say: be
consistent; I'm not sure there's any generic tooling out there that
would leverage using one over the other.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org;echo ${a}@${b}
Ittay,
On Tue, 2007-01-09 at 22:39 +0200, Ittay Dror wrote:
> what troubles me is that most examples i find to explain rest use
> simple cases, where no logic is actually running, and so it is easy to
> transfer services to properties (since the services do very little)
> so here is my real world use case: openQRM is a platform for
> provisioning and managing services in a DC. a service is an entity
> named VirtualEnvironment that encapsulates both a set of policies on
> how to provision and manage the real service and a capture of the real
> service's state (resources, utilization, error, etc.)
> we can do many actions on a service, but the primary ones are to start
> and stop it. starting means resources are selected from a pool and
> assigned to work for the service, the software is deployed on them and
> the service is monitored. stopping is the reverse. obviously openQRM
> does all this. the client just requests to 'start' or 'stop'.
I think the way to look at it is from the other direction. All
communication involves the transfer of information. The HTTP verbs put
this feature of the communication front and centre by collapsing all of
the possible methods in the world down to a small set that transfer
information. Instead of saying {theService, "startService"}, we say
{"http://example.com/services/theService/running", "PUT", "text/plain",
"true"}. This message contains the same information, but puts the
request into a canonical form. It uses a standard method to transfer the
state and a standard content type to format the information.
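Benjamin's canonical form can be sketched in a few lines of Python. This is a hypothetical illustration, not part of openQRM: the URL is the example one, and nothing goes over the wire; the code only builds the request object.

```python
# A sketch of the canonical-form request above (hypothetical URL; the
# request is only constructed here, never actually sent).
from urllib.request import Request

def start_service_request(service_url: str) -> Request:
    """Build a PUT that transfers the desired state 'true' to the
    service's 'running' resource, using a standard content type."""
    return Request(
        url=service_url + "/running",
        data=b"true",                             # the transferred state
        method="PUT",
        headers={"Content-Type": "text/plain"},
    )

req = start_service_request("http://example.com/services/theService")
```

Pointing the same client at a different base URL (a lightbulb, a build server) requires no new code, which is the point being made here.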
Using standard methods and content types has architectural implications
that are not obvious when you write a single server or client/server
pair. Consider the possibility that the author of the client doesn't
want to write software just for your service. Consider the possibility
that they want it to work for a thousand service providers. By phrasing
the request in terms of standard methods and content types we increase
the likelihood that the request will be understood by many service
providers.
Once I have written a client that works with your service, I can make it
work with other services just by providing a different url. In fact,
depending on the semantics of my client I could even point it at
completely different kinds of functions. I could turn a lightbulb on and
off. I could interact with a building automation system.
Using standard methods and content types also assists intermediaries
such as proxies in determining how to handle requests correctly. For
example, I might bar all PUT access to http://example.com/services not
owned by the user... but might still allow GET access or might allow GET
access to a different class of user. If we were not using standard
methods or if we were hiding actions away in urls it would be more
difficult to craft these kinds of rules.
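The kind of intermediary rule described above can be sketched as a toy access check. Because the method is standard, a proxy can gate writes without understanding the service itself; the paths and the ownership check here are hypothetical.

```python
# Toy proxy rule: bar PUT/DELETE on /services resources the user does
# not own, while letting reads pass through. Illustrative only.
def allow(method: str, path: str, user_owns_resource: bool) -> bool:
    if path.startswith("/services") and method in ("PUT", "DELETE"):
        return user_owns_resource      # only owners may modify
    return True                        # GET, HEAD, etc. pass through

allow("GET", "/services/theService/running", False)   # True
allow("PUT", "/services/theService/running", False)   # False
```

If actions were hidden away in URLs or POST bodies instead, the proxy would need service-specific knowledge to make the same decision.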
Benjamin
Hi Roy, On Jan 4, 2007, at 5:37 PM, Roy T. Fielding wrote: >> http://www.artima.com/articles?o=a&t=java&p=4 > > Why don't you redirect to a permalink style URI? The ? will reduce > your cache effectiveness, and is mighty ugly. Well, I guess for > "give me a list of java articles sorted by title" that is okay, > since the articles themselves seem to have permalinks. Note that > > http://www.artima.com/articles/java/index;date;p4 > > is short and says more. YMMV. > I wanted to verify that I'm correctly reading 3.3. Path Component from the URI Generic Syntax doc: http://www.ietf.org/rfc/rfc2396.txt Basically in the path portion of a URI you can have a set of params at the end of a segment, each prepended by a ; char. And it is OK to have a ; char at the end. So these would be valid: http://www.artima.com/articles;oa;tjava;p5 http://www.artima.com/articles;p5 http://www.artima.com/articles And also: http://www.artima.com/articles;oa;tjava;p5; http://www.artima.com/articles;p5; http://www.artima.com/articles; Has anyone had any troubles in practice using URIs with params in their segments? Are people moving towards this approach rather than query parameters with their ?, &, and = chars? Thanks. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
On Jan 9, 2007, at 3:56 PM, Bill Venners wrote: > I wanted to verify that I'm correctly reading 3.3. Path Component > from the URI Generic Syntax doc: > > http://www.ietf.org/rfc/rfc2396.txt Toss it... you should be looking at http://www.ietf.org/rfc/rfc3986.txt > Basically in the path portion of a URI you can have a set of params > at the end of a segment, each prepended by a ; char. And it is OK > to have a ; char at the end. So these would be valid: > > http://www.artima.com/articles;oa;tjava;p5 > http://www.artima.com/articles;p5 > http://www.artima.com/articles > > And also: > > http://www.artima.com/articles;oa;tjava;p5; > http://www.artima.com/articles;p5; > http://www.artima.com/articles; > > Has anyone had any troubles in practice using URIs with params in > their segments? Not that I know of. > Are people moving towards this approach rather than query > parameters with their ?, &, and = chars? A few, but only when they are not using HTML forms to create the query. The vast majority just copy whatever they read in "CGI for Dummies". The only real difference is that some app frameworks will pre-parse the ?key=value args for you, so it is simpler to implement. It used to be that browsers and Squid would not cache any response containing a query segment, but that may have changed by now. ....Roy
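For what it's worth, some stock tooling does pre-parse these: CPython's urllib.parse still implements the RFC 2396-era split, where urlparse (though not urlsplit) peels the ;-parameters off the final path segment into a separate attribute. A quick check:

```python
# urlparse exposes RFC 2396-style path parameters via .params; note
# that only the final segment's parameters are split off.
from urllib.parse import urlparse

p = urlparse("http://www.artima.com/articles;oa;tjava;p5")
print(p.path, "|", p.params)   # /articles | oa;tjava;p5

q = urlparse("http://www.artima.com/articles;p1=hello/java;p3=nice")
print(q.path, "|", q.params)   # /articles;p1=hello/java | p3=nice
```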
Bill Venners <bv-svp@...> writes: > Basically in the path portion of a URI you can have a set of params > at the end of a segment, each prepended by a ; char. And it is OK to > have a ; char at the end. So these would be valid: > > http://www.artima.com/articles;oa;tjava;p5 > http://www.artima.com/articles;p5 > http://www.artima.com/articles > > And also: > > http://www.artima.com/articles;oa;tjava;p5; > http://www.artima.com/articles;p5; > http://www.artima.com/articles; But also: http://www.artima.com/articles;p1=1;/java;p2=3 http://www.artima.com/articles;p1=hello;p2=world/java/serverside;p4=xxx I once tried to get the servlet API expert group to understand this but at the time they didn't really get it. > Has anyone had any troubles in practice using URIs with params in > their segments? Are people moving towards this approach rather than > query parameters with their ?, &, and = chars? It would be good because it is so much more flexible. However, it is also really complicated to parse or represent. Personally, I stick with a path and then a query. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Hi Hugh, On Jan 9, 2007, at 4:21 PM, Hugh Winkler wrote: > On 1/9/07, Bill Venners <bv-svp@...> wrote:\ > >> Has anyone had any troubles in practice using URIs with params in >> their segments? > > Yes. Tomcat strips everything after the first semicolon before passing > the uri to the servlet, because that's where they stuff their stupid > ;JSESSIONID=.... when they can't use cookies. > Oh brother. Well that's a problem. About the only thing I use from J2EE right now in our new architecture is the parsing of query params. I was frustrated a few months ago when I wanted to figure out a way to custom encode a session ID into a URL. The spec doesn't allow for that as far as I could see. Yesterday I discovered the API doesn't allow me to canonicalize away a lone question mark at the end of a URI. I was thinking of switching from Tomcat to JETTY, so perhaps that will preserve the ; params. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
On 1/9/07, Bill Venners <bv-svp@...> wrote:\ > Has anyone had any troubles in practice using URIs with params in > their segments? Yes. Tomcat strips everything after the first semicolon before passing the uri to the servlet, because that's where they stuff their stupid ;JSESSIONID=.... when they can't use cookies. Hugh
Bill Venners <bv-svp@...> writes: > Oh brother. Well that's a problem. segment params were discussed for 2.4 Servlet API. As I recall people wanted to treat them as parameters (ie: the same as POST/form-data or query parameters). But you can't do that coz they're more complicated than that. > About the only thing I use from > J2EE right now in our new architecture is the parsing of query > params. I was frustrated a few months ago when I wanted to figure out > a way to custom encode a session ID into a URL. The spec doesn't > allow for that as far as I could see. Declare a filter that provides the getSession() method to the downstream targets. You'd have to use your own session implementation of course. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
On Jan 9, 2007, at 4:21 PM, Hugh Winkler wrote: > Yes. Tomcat strips everything after the first semicolon before passing > the uri to the servlet, because that's where they stuff their stupid > ;JSESSIONID=.... when they can't use cookies. Crikey... why doesn't someone just fix the code? I mean, that is too broken for words -- if I did that in Apache httpd the users would lynch me (assuming it ever got past the dev list). ....Roy
Hi Nic, On Jan 9, 2007, at 4:13 PM, Nic James Ferrier wrote: > But also: > > http://www.artima.com/articles;p1=1;/java;p2=3 > http://www.artima.com/articles;p1=hello;p2=world/java/ > serverside;p4=xxx > I think segments are offset by slashes, so if you want /java to be a ; param you'd probably need to encode it as in: http://www.artima.com/articles;p1=1;%2Fjava;p2=3 Otherwise a relative link on that page of "fred.html" will attempt to be grabbed from: http://www.artima.com/articles;p1=1;/fred.html (I think.) Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Bill Venners <bv-svp@...> writes:

> Hi Nic,
>
> On Jan 9, 2007, at 4:13 PM, Nic James Ferrier wrote:
>
>> But also:
>>
>> http://www.artima.com/articles;p1=1;/java;p2=3
>> http://www.artima.com/articles;p1=hello;p2=world/java/serverside;p4=xxx
>
> I think segments are offset by slashes, so if you want /java to be
> a ; param you'd probably need to encode it as in:
>
> http://www.artima.com/articles;p1=1;%2Fjava;p2=3
>
> Otherwise a relative link on that page of "fred.html" will attempt to
> be grabbed from:
>
> http://www.artima.com/articles;p1=1;/fred.html

You misunderstand my understanding of the spec /8->

I was trying to say that I think that each segment can have parameters
associated with it, eg in the path:

/articles;p1=hello;p2=world/java;p3=nice;p5=day

the segment "/articles" can have some parameters:

p1=hello
p2=world

and the segment "/java" can have some parameters:

p3=nice
p5=day

This certainly used to be the case. I'm not sure what the current
situation is.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
for all your tapsell ferrier needs
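Nic's per-segment reading can be sketched with a hand-rolled parser, since stock tooling generally won't do it for you. This is an illustrative sketch, not an RFC-compliant parser (no percent-decoding, for one):

```python
# Split a path into (segment, params) pairs, treating each segment's
# ;-delimited suffixes as its own parameters. Illustrative only: no
# percent-decoding, and a bare param like ";p5" gets an empty value.
def parse_segments(path: str):
    out = []
    for seg in path.lstrip("/").split("/"):
        name, *params = seg.split(";")
        out.append((name, {k: v for k, _, v in
                           (p.partition("=") for p in params if p)}))
    return out

pairs = parse_segments("/articles;p1=hello;p2=world/java;p3=nice;p5=day")
# -> [('articles', {'p1': 'hello', 'p2': 'world'}),
#     ('java', {'p3': 'nice', 'p5': 'day'})]
```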
Elliotte Harold wrote:
> Mike Schinkel wrote:
> > The internal admins configuring the firewall in many places are
> > clueless (believe me, I ran a business for a while where that was
> > truly the case but I couldn't afford to hire anyone better. And it
> > will be the case in many small to medium size businesses.)
>
> The market will deal with businesses like that.

I believe that the market doesn't care about that level of detail; only
academics and standards committees care to that extent. I believe the
market cares about what works for them to achieve their specific goals.

> I don't see why genuinely competent organizations should...

You are defining "genuinely competent organizations" using one measure:
their adherence to a technical ideal. There are many other measures of a
competent organization, and the market views earnings of for-profit
companies as one of its most important concerns. If following a certain
technical ideal depresses earnings, then while you would view the
organization as genuinely competent, there is a large group of others,
especially shareholders and most employees, who would view the company
as grossly incompetent. I don't mean to discredit your view; I instead
mean to point out that there are numerous views and that those views
should be weighed and balanced accordingly. Suggesting the other view is
not important may make one feel better, but it won't achieve any related
improvements.

> have to put up with bad architectures to support their
> pointy-haired competitors.

Aside from the unnecessary denigration, which I will ignore, you may
have misunderstood; I didn't mean "support competitors", I meant
"compete with competitors."

> It seems suspicious to me that these purported admins who are
> so incompetent they can't properly manage PUT and DELETE knew
> enough to block these methods in the first place. I suspect
> what may really be going on is unchanged defaults in the
> firewalls and proxy servers. If indeed that's the case then
> it's much easier to fix the problem at its source by
> educating a relatively small number of proxy and firewall vendors.

I made an assumption in my comments which you ferreted out: yes, I was
talking about defaults, where users of proxies and firewalls barely have
the skills (or the money) to get them up and running, let alone optimize
them. Many organizations, even large ones, use the same "Jell-O" model
[1] for infrastructure configuration as some companies use for shipping
software: when it stops quivering, it's done!

OTOH, Nic was speaking about the large organizations where some
departments often didn't care about the concerns of others.

> Indeed all you may need to do when a customer tells you your
> system seems broken is say, "Oh, you're using proxy X? That's
> broken and non-spec compliant. Use proxy Y instead and all
> will be fine."

That's fine, assuming that information gets to the right person in the
company (which is a huge assumption) and that the right person doesn't
have other tasks they view as higher priority (which is another big
assumption).

> Of course, this is not a binary situation. Some will fix
> their systems and some won't. My experience leads me to
> believe that when you insist on spec compliance, more people
> will fix their systems and come into compliance than won't.

I wouldn't disagree with this.

> You will lose a few percent.

That, unfortunately, is the problem. When you are in government or
academia you can usually afford to be draconian. But in the business
world that "few percent" is often the difference between meeting
quarterly goals or not. For small companies it can be the difference
between making payroll or not. I'm well aware of the latter, as I've
been in the role of having to meet payroll for more than 20 years of my
career. I don't know your experience; have you?

A for-profit business cannot choose to enforce a standard on its
customers over achieving either of the aforementioned two things. Taking
what many who are not technical would view as an overly pedantic posture
on standards over meeting revenue projections would most certainly lead
shareholders to replace management at best, or to sue them at worst.

> However if you try and support
> all the broken and brain damaged networks out there, you do
> far more damage to everyone. You end up hurting the
> compliant customers to support the noncompliant ones in a
> dozen different, subtle ways. Maximum net benefit to all
> involved is achieved by jettisoning the truly incompetent
> organizations that will not and cannot learn the proper way
> to do things.

Unfortunately, only people who are drawn to the technology for personal
interest see this as important enough to let it affect other factors.
For another perspective on this, ask the WHATWG editor Ian Hickson if he
thinks browser vendors will support standards that could potentially
minimize their marketshare in any way.[2] Almost no company will be
willing to limit its marketshare for a technical ideal. The way to
achieve the technical ideal on as broad a basis as possible is to make
that support painless, not to ignore market realities with the claim
that "it is the right thing to do for the greater good."

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/

[1] http://www.google.com/search?q="Don't+shake+the+Jell-O"
[2] http://listserver.dreamhost.com/pipermail/whatwg-whatwg.org/2006-December/008600.html
Dr. Ernie Prabhakar wrote:
> I'm a static resource bigot

Me too.

> > A. Static
> > http://www.foo.com/users/
> > http://www.foo.com/users/john-smith/
> > http://www.foo.com/users/john-smith/cell-phone/
> + shorter
> + implies fixed resources
> + implies a single unique resource
>
> > B. Dynamic (Long)
> > http://www.foo.com/?section=users
> > http://www.foo.com/?section=users&user=john-smith
> > http://www.foo.com/?section=users&user=john-smith&phone=cell-phone
>
> I actually think this is the wrong contrast. I would propose instead:
>
> C. Dynamic (Short)
> http://www.foo.com/?user=*
> http://www.foo.com/?user=john-smith
> http://www.foo.com/?user=john-smith&phone=cell-phone
>
> Since "section=users" seems redundant with "user=".
>
> Given "A" and "C", I actually think which is better becomes
> context-dependent. If you have a static tree of resources the
> end user drills down, then "A" is the most natural. If you
> have a huge database where the user could be sorting on many
> different fields, then "C" might be the simpler construction.

What specifically makes C simpler in your view? HTML FORMs support C
better than A, but I was explicitly asking about REST-based systems,
which we all know HTML FORMs do not support well.

Thanks for the comments.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
Bill Venners wrote: > If I understand this correctly, the term resource in the > REST and HTTP context means a thing of value as perceived > by the people using the system. If all you have is a bunch > of URIs, you can't make any assumptions about whether > they refer to the same resource or not. But if you have a > bunch of URIs plus more semantic information about the > resources they represent, then you can make such > assumptions. Yes, that's where my thoughts are headed.. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/
Mark Baker wrote:
> This is a FAQ, Mike. I'm sure a search for "query" in the
> archives will turn up a gold mine of pros/cons.

Oh, I'm well aware. I've got many printed pages from list discussions
and web articles on the subject. But I wanted to get people's direct
opinions, because it is easier to get direct responses than to ferret
out opinions from context when the discussion was on other topics.

> If you could put what you find on the RESTwiki (the FAQ
> page in particular), it would be appreciated.

I ultimately plan to blog about it, but could also reference it on the
RESTwiki. However, the reason I was asking was that I was trying to
gather evidence to present to Ian Hickson to support adding URI Template
support to the action method of the form element in Web Forms 2.0. I
sent an email to uri@... to that effect[1], but got no response.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/

[1] http://lists.w3.org/Archives/Public/uri/2006Dec/0028.html
Caches ceased placing undue significance on the query string long ago. IIRC, that behavior became obsolete as PHP became more popular. Using query parameters in path segments is uninteresting. They make ugly object references, and most civilized web software translates human readable URLs to CGI environment variables using rewrite rules. See Rails, Python Routes/WSGI, and good old mod_rewrite. In theory, structured path segments allow more versatile delegation of authority, but it doesn't seem economically compelling so far. -Rob On 1/9/07, Roy T. Fielding <fielding@...> wrote: > On Jan 9, 2007, at 3:56 PM, Bill Venners wrote: > > I wanted to verify that I'm correctly reading 3.3. Path Component > > from the URI Generic Syntax doc: > > > > http://www.ietf.org/rfc/rfc2396.txt > > Toss it... you should be looking at > > http://www.ietf.org/rfc/rfc3986.txt > > > > Basically in the path portion of a URI you can have a set of params > > at the end of a segment, each prepended by a ; char. And it is OK > > to have a ; char at the end. So these would be valid: > > > > http://www.artima.com/articles;oa;tjava;p5 > > http://www.artima.com/articles;p5 > > http://www.artima.com/articles > > > > And also: > > > > http://www.artima.com/articles;oa;tjava;p5; > > http://www.artima.com/articles;p5; > > http://www.artima.com/articles; > > > > Has anyone had any troubles in practice using URIs with params in > > their segments? > > Not that I know of. > > > Are people moving towards this approach rather than query > > parameters with their ?, &, and = chars? > > A few, but only when they are not using HTML forms to create the query. > The vast majority just copy whatever they read in "CGI for Dummies". > The only real difference is that some app frameworks will pre-parse > the ?key=value args for you, so it is simpler to implement. > > It used to be that browsers and Squid would not cache any response > containing a query segment, but that may have changed by now. 
> > ....Roy > > > -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
"Mike Schinkel" <mikeschinkel@...> writes:
> I made an assumption in my comments which you ferreted out: yes, I was
> talking about defaults, where users of proxies and firewalls barely
> have the skills (or the money) to get them up and running, let alone
> optimize them. Many organizations, even large ones, use the same
> "Jell-O" model [1] for infrastructure configuration as some companies
> use for shipping software: when it stops quivering, it's done!
>
> OTOH, Nic was speaking about the large organizations where some
> departments often didn't care about the concerns of others.

I think Elliotte is correct that we could fix the problem by getting the
proxy makers to change their proxies. However, how long would it take to
fix the problem? The big organization that I was referring to had
(amongst others) a Novell NetWare proxy server. It was at least 10 years
old. I recently made a trip to a medium-sized company who were still
using Microsoft Proxy Server 1.0. I don't even want to think about how
old that is.

>> Indeed all you may need to do when a customer tells you your
>> system seems broken is say, "Oh, you're using proxy X? That's
>> broken and non-spec compliant. Use proxy Y instead and all
>> will be fine."
>
> That's fine, assuming that information gets to the right person in the
> company (which is a huge assumption) and that the right person doesn't
> have other tasks they view as higher priority (which is another big
> assumption).

I agree, Mike. It's often really difficult to find out what is actually
causing the problem. Unless you have network access to the client site
in question it's almost impossible.

And Elliotte's assertion doesn't work at all if we're talking about a
web 2.0 business like digg or Flickr. If those sites used PUT and it
failed for everyone behind a crappy proxy, just how many of those
failures would get reported to them?

> Unfortunately, only people who are drawn to the technology for
> personal interest see this as important enough to let it affect other
> factors. For another perspective on this, ask the WHATWG editor Ian
> Hickson if he thinks browser vendors will support standards that could
> potentially minimize their marketshare in any way.[2] Almost no
> company will be willing to limit its marketshare for a technical
> ideal. The way to achieve the technical ideal on as broad a basis as
> possible is to make that support painless, not to ignore market
> realities with the claim that "it is the right thing to do for the
> greater good."

Spec compliance happens in the end. It just takes a long time.

Of course, it would help if broken and rubbish implementations didn't
get so widespread in the first place. But that's to do with the
ridiculous amounts of hype that drives our industry.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
for all your tapsell ferrier needs
On 1/9/07, Mike Schinkel <mikeschinkel@...> wrote: > However, the reason I was asking was because I was trying to gather evidence > to present to Ian Hickson to support adding URI Template support to the > action method of the form element in Web Forms 2.0. I sent an email to > uri@... to that effect[1], but got to response. Though in theory that sounds like a great idea, it would break existing clients which is a non-starter when defining HTML extensions. You'd need a new parameter to get around that problem. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Hi Nic, On Jan 9, 2007, at 6:02 PM, Nic James Ferrier wrote: > Bill Venners <bv-svp@...> writes: > >> I think segments are offset by slashes, so if you want /java to be >> a ; param you'd probably need to encode it as in: >> >> http://www.artima.com/articles;p1=1;%2Fjava;p2=3 >> >> Otherwise a relative link on that page of "fred.html" will attempt to >> be grabbed from: >> >> http://www.artima.com/articles;p1=1;/fred.html > > You misunderstand my understanding of the spec /8-> > > I was trying to say that I think that each segment can have parameters > associated with it, eg in the path: > > /articles;p1=hello;p2=world/java;p3=nice;p5=day > > the segment "/articles" can have some parameters: > > p1=hello > p2=world > > and the segment "/java" can have some parameters: > > p4=nice > p5=day > > > This certainly used to be the case. I'm not sure what the current > situation is. > Ah, yes, that was my reading of the spec too. That's just not what I thought you meant. So you meant what you said, and said what you meant. Nevertheless, I am only considering putting semicolons at the end of the entire path. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Hi Robert, On Jan 9, 2007, at 6:45 PM, Robert Sayre wrote: > Caches ceased placing undue significance on the query string long ago. > IIRC, that behavior became obsolete as PHP became more popular. > > Using query parameters in path segments is uninteresting. They make > ugly object references, and most civilized web software translates > human readable URLs to CGI environment variables using rewrite rules. > See Rails, Python Routes/WSGI, and good old mod_rewrite. In theory, > structured path segments allow more versatile delegation of authority, > but it doesn't seem economically compelling so far. > What do you mean by "in theory structured path segments allow more versatile delegation of authority?" Also, I personally am not sure whether semicolon separated params would be prettier or uglier than traditional query params. The plus is with the semicolon approach our URIs could require one less char per query param (Since we're just using one char for the param name, we can drop the '='). The minus is that people are more used to seeing traditional query params and might find it harder to mentally extract the path part of our URIs if they are full of trailing semicolons, and I like to use the path part to help users figure out our information architecture. The other thing I'm considering is using an extra semicolon at the end to indicate a user is logged in, so I can make better use of caching. If params are already semicolon separated, tacking on an extra one at the end seems more natural. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
thanks for your help. one final question, is it ok to model the API for 'status' as http://example.org/openQRM/service/abc123#status (thus stating more clearly it is a property of the service, and not a resource in itself) thanks, ittay John D. Heintz wrote: > > > Hello Ittay, > > Here's how I would approach the problem by way of small example. I've > used "====..." just to help with the formatting. > > The first thing I would do is break down the name/uri space like this: > ============ === > GET http://example. org/openQRM/ service/abc123 > <http://example.org/openQRM/service/abc123> > ============ === > HTTP/1.1 200 OK > > <?xml version="1.0" encoding="utf-8"?> > ... > <service name="..." status="started" ... > ============ === > > This indicates that we can get a representation of the service > identified by "abc123". > > We could then PUT or POST a modifed representation with > status="stopped", but I'll take a different approach. Let's have the > represenation provide more information via the hypertext: > > ============ === > GET http://example. org/openQRM/ service/abc123 > <http://example.org/openQRM/service/abc123> > ============ === > HTTP/1.1 200 OK > > <?xml version="1.0" encoding="utf-8"?> > ... > <service name="..." ... > <status current="started" href="abc123/status" ... > ============ == > > Now we have marked a new URI that is specially designed for the status > of the "abc123" service. > > From that URI: > ============ === > GET http://example. org/openQRM/ service/abc123/ status > <http://example.org/openQRM/service/abc123/status> > ============ === > HTTP/1.1 200 OK > > started > ============ === > > Let's say that we want to support a simple PUT on this resource to > enable changing the state of a service: > > ============ === > PUT http://example. 
org/openQRM/ service/abc123/ status > <http://example.org/openQRM/service/abc123/status> > > stopped > ============ === > HTTP/1.1 200 OK > > stopped > ============ === > > If ours was the only client controlling that service this would be > perfectly clear, but other clients (or the service itself) might have > already changed the status. Let's add some details that provide us with > information to do that. > > ============ === > GET http://example. org/openQRM/ service/abc123/ status > <http://example.org/openQRM/service/abc123/status> > ============ === > HTTP/1.1 200 OK > Last-Modified: Tue, 09 Jan 2007 22:34:37 GMT > ETag: "xyz-789" > > started > ============ === > > The Last-Modified should be clear, the ETag is a server-side hashed id > that can encode details like identity and version number. In our > subsequent PUT we can indicate to only do the work if the resource still > matches what we last saw (optimistic locking). > > ============ === > PUT http://example. org/openQRM/ service/abc123/ status > <http://example.org/openQRM/service/abc123/status> > If-Modified- Since: Tue, 09 Jan 2007 22:34:37 GMT > > stopped > ============ === > HTTP/1.1 200 OK > > stopped > ============ === > > Now it's safe for us to assume that "we" are the agent that did stop the > service. A failure response would have been returned if not. > > Stuff that I've ignored so far: > * content type: I'm just typing anything for example. Should always > prefer standard and interoperable content types. > * content negotiation: browsers and B2B agents may not want the same > representation content types > * Forms: How does this agent know it can PUT to that resource? What > values are allowed to be sent? > * probably a bunch more, I hope others jump in to correct me ;) > > Hope this helps provide a more clear example of the differences. > > John > > On 1/9/07, *Ittay Dror* < ittayd@qlusters. 
com > <mailto:ittayd@...>> wrote: > > what troubles me is that most examples i find to explain rest use > simple cases, where no logic is actually running, and so it is easy > to transfer services to properties (since the services do very little) > > so here is my real world use case: openQRM is a platform for > provisioning and managing services in a DC. a service is an entity > named VirtualEnvironment that encapsulates both a set of policies on > how to provision and manage the real service and a capture of the > real service's state (resources, utilization, error, etc.) > > we can do many actions on a service, but the primary ones are to > start and stop it. starting means resources are selected from a pool > and assigned to work for the service, the software is deployed on > them and the service is monitored. stopping is the reverse. > obviously openQRM does all this. the client just requests to 'start' > or 'stop'. > > now, it is hard for me to see 'start' and 'stop' as values of the > 'currentactivity' property. if anything, it looks like a masquerade > of the RPC equivalent of posting to the VirtualEnvironment entity > with the pair 'action=start'. > > thank you for your help, > ittay > > wahbedahbe wrote: > > > > > > Hmmm. I'm a bit of a newbie too but let me take a stab at this as an > > exercise. > > > > Now let me first say that the way you posed the question is very > > "objecty" or "servicey" to begin with and makes it a bit hard to > > answer. You've basically said "I have a service with methods A, > B, and > > C; how do I make it resource based?" The problem is that REST has no > > concept of class-specific methods, so without knowing what A, B, > and C > > do, its hard to answer your question. In short, there is no straight > > forward transformation of an abstract service to abstract resources. > > (At least not that I know of...) They are two very different ways of > > modeling the interface. 
> > I can't transform one type of solution to
> > another without understanding the problem.
> >
> > But I can infer a bit about the problem from the fact you are using
> > the concept of a person and things a person can do (eat, sleep, work).
> > I'll assume here that sleep() and work() change the state of the
> > person (to sleeping and working respectively). Maybe there is a
> > substate when working that says what work you are doing. Let's also
> > assume that eat() adds some food items to a list of things eaten.
> >
> > So to work (on building a bike shed):
> > PUT /person/ittay/currentactivity
> > Content-Type: application/x-activity+xml
> >
> > <activity>
> >   <type>work</type>
> >   <sub-type>building a bike shed</sub-type>
> > </activity>
> >
> > To sleep:
> > PUT /person/ittay/currentactivity
> > Content-Type: application/x-activity+xml
> >
> > <activity>
> >   <type>sleep</type>
> > </activity>
> >
> > To eat an apple:
> > POST /person/ittay/stomach
> > Content-Type: application/x-food+xml
> >
> > <food>apple</food>
> >
> > Now these actions could also kick off related business processes. I
> > think the only method that can't is GET. Note that you could use GET
> > to retrieve the current activity from /person/ittay/currentactivity or
> > the list of things eaten from /person/ittay/stomach.
> >
> > --- In rest-discuss@yahoogroups.com, Ittay Dror <ittayd@...> wrote:
> > >
> > > thanks for your example and site reference.
> > >
> > > i did know that i use only HTTP methods, what i lacked is how to
> > > model my business methods. i am a newbie at this (maybe i should have
> > > indicated that)
> > >
> > > may i impose with a followup?
> > >
> > > say i have a person entity in my application. this entity has
> > > several business methods (eat, sleep, work). how should i model a
> > > rest api for these?
> > > create a PersonManager resource (which then will need
> > > to accept as argument what action to do), or create resources per
> > > action (/person/ittay/eat maybe?).
> > >
> > > thanks,
> > > ittay
> > >
> > > Jan Algermissen wrote:
> > > > Hi Ittay,
> > > >
> > > > the kind of question you ask indicates that you have not yet
> > > > understood a fundamental aspect of REST, namely that there is a
> > > > uniform interface. You do not get any methods other than the
> > > > uniform set (the HTTP methods in the case of using HTTP as an
> > > > architecture).
> > > >
> > > > Without any intention to be rude, I suggest you take a while to
> > > > read through some of the REST material out on the Web (the REST
> > > > wiki at http://rest.blueoxen.net is a good starting point, as is
> > > > Paul's site at http://prescod.com/rest ). This will help in future
> > > > discussions.
> > > >
> > > >> Should I use 'POST' with a 'method=build' parameter to the 'plan'
> > > >> URI?
> > > >
> > > > No, this is about the worst from a REST POV as it is hidden RPC.
> > > >
> > > > Think along these lines:
> > > >
> > > > Provide a means for clients to pick a resource that behaves the
> > > > way they are interested in (some hypermedia the clients come across
> > > > could declare a resource as a BuildingManager) and then use POST to
> > > > submit the building plan.
> > > >
> > > > POST /BuildingManager
> > > > Content-Type: application/blueprint  <-- this type is fictitious;
> > > > it would need to be standardized within the realm of your system
> > > >
> > > > <blueprint>
> > > >   <wall location="...."/>
> > > > </blueprint>
> > > >
> > > > IOW, in a RESTful system clients late-bind to resources based on
> > > > runtime-declared abstract behavior of those resources and they
> > > > communicate via a fixed set of methods.
> > > > HTH,
> > > >
> > > > Jan
> > > >
> > > >> Thanks,
> > > >> Ittay
> > >
> > > --
> > > ===================================
> > > Ittay Dror,
> > > Chief architect,
> > > R&D, Qlusters Inc.
> > > ittayd@...
> > > +972-3-6081994 Fax: +972-3-6081841
> > >
> > > www.openqrm.org - Data Center Provisioning
>
> --
> ===================================
> Ittay Dror,
> Chief architect,
> R&D, Qlusters Inc.
> ittayd@qlusters.com
> +972-3-6081994 Fax: +972-3-6081841
>
> www.openqrm.org - Data Center Provisioning
>
> Yahoo! Groups Links
>
> (Yahoo! ID required)
>
> mailto:rest-discuss-fullfeatured@yahoogroups.com
>
> --
> John D. Heintz
> Principal Consultant
> New Aspects of Software
> Austin, TX
> (512) 633-1198

--
===================================
Ittay Dror,
Chief architect,
R&D, Qlusters Inc.
ittayd@...
+972-3-6081994 Fax: +972-3-6081841

www.openqrm.org - Data Center Provisioning
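The conditional-request flow in John's example can be sketched in code. This is a minimal in-memory simulation, not openQRM's actual API: the `StatusResource` class, its ETag format, and the status values are all made up for illustration. A real server would compute the ETag itself and enforce `If-Match` / `If-Unmodified-Since` (RFC 2616, sections 14.24 and 14.28) on its side.

```python
# Simulation of a conditional PUT against a status resource like
# /openQRM/service/abc123/status. Names and ETag format are hypothetical.

class StatusResource:
    """A 'status' resource whose PUT honours an If-Match precondition."""

    def __init__(self, value):
        self.value = value
        self.version = 1

    @property
    def etag(self):
        # A real server might hash the representation; a counter works here.
        return f'"xyz-{self.version}"'

    def get(self):
        # 200 OK with the current representation and its validator.
        return 200, self.value, self.etag

    def put(self, new_value, if_match=None):
        # Conditional PUT: apply the change only if the client's ETag
        # still matches; otherwise 412 Precondition Failed.
        if if_match is not None and if_match != self.etag:
            return 412, self.value, self.etag
        self.value = new_value
        self.version += 1
        return 200, self.value, self.etag


status = StatusResource("started")
code, body, etag = status.get()            # client A reads the status
status.put("restarting")                   # someone else changes it meanwhile
code, body, _ = status.put("stopped", if_match=etag)
assert code == 412                         # client A's stale PUT is rejected
```

Client A would then GET again, look at the new state, and decide whether its PUT is still appropriate — which is exactly the lost-update protection the example is after.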
bad choice of characters. '#' is for fragment identifiers. maybe
something like:
http://example.org/openQRM/service/abc123:status

and another question, is it ok to model the 'start' and 'stop' actions
as 'action' resources? so, POST to http://example.org/openQRM/actions/start
with data http://example.org/openQRM/service/abc123

thanks,
ittay

Ittay Dror wrote:
>
> thanks for your help.
>
> one final question, is it ok to model the API for 'status' as
> http://example.org/openQRM/service/abc123#status (thus stating more
> clearly it is a property of the service, and not a resource in itself)
>
> thanks,
> ittay
>
> John D. Heintz wrote:
> >
> > Hello Ittay,
> >
> > Here's how I would approach the problem by way of a small example.
> > I've used "====..." just to help with the formatting.
> >
> > The first thing I would do is break down the name/URI space like this:
> > ===============
> > GET http://example.org/openQRM/service/abc123
> > ===============
> > HTTP/1.1 200 OK
> >
> > <?xml version="1.0" encoding="utf-8"?>
> > ...
> > <service name="..." status="started" ...
> > ===============
> >
> > This indicates that we can get a representation of the service
> > identified by "abc123".
> >
> > We could then PUT or POST a modified representation with
> > status="stopped", but I'll take a different approach. Let's have the
> > representation provide more information via the hypertext:
> >
> > ===============
> > GET http://example.org/openQRM/service/abc123
> > ===============
> > HTTP/1.1 200 OK
> >
> > <?xml version="1.0" encoding="utf-8"?>
> > ...
> > <service name="..." ...
> > <status current="started" href="abc123/status" ...
> > ===============
> >
> > Now we have marked a new URI that is specially designed for the status
> > of the "abc123" service.
> >
> > From that URI:
> > ===============
> > GET http://example.org/openQRM/service/abc123/status
> > ===============
> > HTTP/1.1 200 OK
> >
> > started
> > ===============
> >
> > Let's say that we want to support a simple PUT on this resource to
> > enable changing the state of a service:
> >
> > ===============
> > PUT http://example.org/openQRM/service/abc123/status
> >
> > stopped
> > ===============
> > HTTP/1.1 200 OK
> >
> > stopped
> > ===============
> >
> > If ours was the only client controlling that service this would be
> > perfectly clear, but other clients (or the service itself) might have
> > already changed the status. Let's add some details that provide us
> > with the information to deal with that.
> >
> > ===============
> > GET http://example.org/openQRM/service/abc123/status
> > ===============
> > HTTP/1.1 200 OK
> > Last-Modified: Tue, 09 Jan 2007 22:34:37 GMT
> > ETag: "xyz-789"
> >
> > started
> > ===============
> >
> > The Last-Modified should be clear; the ETag is a server-side hashed id
> > that can encode details like identity and version number. In our
> > subsequent PUT we can indicate that the work should only be done if
> > the resource still matches what we last saw (optimistic locking).
> >
> > ===============
> > PUT http://example.org/openQRM/service/abc123/status
> > If-Unmodified-Since: Tue, 09 Jan 2007 22:34:37 GMT
> >
> > stopped
> > ===============
> > HTTP/1.1 200 OK
> >
> > stopped
> > ===============
> >
> > Now it's safe for us to assume that "we" are the agent that stopped
> > the service. A failure response would have been returned if not.
> >
> > Stuff that I've ignored so far:
> > * content type: I'm just typing anything for the example. Should
> >   always prefer standard and interoperable content types.
> > * content negotiation: browsers and B2B agents may not want the same
> >   representation content types
> > * Forms: How does this agent know it can PUT to that resource? What
> >   values are allowed to be sent?
> > * probably a bunch more, I hope others jump in to correct me ;)
> >
> > Hope this helps provide a clearer example of the differences.
> >
> > John
> >
> > On 1/9/07, *Ittay Dror* <ittayd@qlusters.com> wrote:
> >
> > what troubles me is that most examples i find to explain rest use
> > simple cases, where no logic is actually running, and so it is easy
> > to transfer services to properties (since the services do very little)
> >
> > so here is my real world use case: openQRM is a platform for
> > provisioning and managing services in a DC. a service is an entity
> > named VirtualEnvironment that encapsulates both a set of policies on
> > how to provision and manage the real service and a capture of the
> > real service's state (resources, utilization, errors, etc.)
> >
> > we can do many actions on a service, but the primary ones are to
> > start and stop it. starting means resources are selected from a pool
> > and assigned to work for the service, the software is deployed on
> > them and the service is monitored. stopping is the reverse.
> > obviously openQRM does all this. the client just requests to 'start'
> > or 'stop'.
> > [...]
> >
> > --
> > John D. Heintz
> > Principal Consultant
> > New Aspects of Software
> > Austin, TX
> > (512) 633-1198

--
===================================
Ittay Dror,
Chief architect,
R&D, Qlusters Inc.
ittayd@...
+972-3-6081994 Fax: +972-3-6081841

www.openqrm.org - Data Center Provisioning
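Ittay's objection to '#' can be checked directly: a fragment identifier is resolved on the client side and never appears in the request URI that reaches the server, so `...abc123#status` cannot name a distinct server-side resource. A small sketch using Python's standard `urllib.parse` (the URL is the example one from the thread):

```python
from urllib.parse import urlsplit

# The fragment is a separate component; it is not part of the path the
# client sends, so the server sees the same resource with or without it.
parts = urlsplit("http://example.org/openQRM/service/abc123#status")
print(parts.path)      # /openQRM/service/abc123
print(parts.fragment)  # status

# The request-target a client would actually transmit contains only the
# path (and query), never the fragment:
request_target = parts.path + (("?" + parts.query) if parts.query else "")
print(request_target)  # /openQRM/service/abc123
```

This is why the thread moves on to alternatives like `.../abc123/status` or a query term, both of which do reach the server.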
> > say i have a person entity in my application. this entity has
> > several business methods (eat, sleep, work). how should i
> > model a rest api for these? create a PersonManager resource
> > (which then will need to accept as argument what action to
> > do), or create resources per action (/person/ittay/eat maybe?).

Rather than modeling an API as having many methods, model the
application as kinds of resources and their states. A resource has an
identifier and a representation (it could have several representations,
but we should keep the example simple for now). A client sets or gets
the state of a resource, then takes action based on the responses to
setting or getting the data.

Perhaps rather than invoking a "person[27].eat()" method, you could add
to "person[27].goals" a value of "eat".

POST /people/ittay/goals HTTP/1.1
Host: www.dror.com
Content-Type: application/xml

<goals>
  <goal priority="high">eat</goal>
</goals>

The server may decide that the request to eat is inappropriate, or the
server may schedule the activity for later, or it may immediately take
action. These could be indicated in a response that points to the
current activities:

201 Created
Location: /people/ittay/activities

<activities>
  <activity>watch television</activity>
  <activity>eat</activity>
</activities>
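The goals/activities idea above can be sketched as follows. The `PersonResource` class, its return values, and the `/people/ittay/...` URIs are hypothetical; the point is only that the client states a desired goal and the server decides whether and when to act, with the response pointing at the resulting state.

```python
# Sketch of the "POST a goal, read back the activities" pattern.
# Resource names and status codes mirror the example; the logic is made up.

class PersonResource:
    def __init__(self):
        self.activities = ["watch television"]

    def post_goal(self, goal):
        # The server is free to refuse, defer, or act immediately.
        if goal in ("eat", "sleep", "work"):
            self.activities.append(goal)
            # 201 Created, Location pointing at the current activities.
            return 201, "/people/ittay/activities"
        return 403, None  # inappropriate goal, request refused

    def get_activities(self):
        return 200, list(self.activities)


ittay = PersonResource()
code, location = ittay.post_goal("eat")
assert code == 201 and location == "/people/ittay/activities"
assert ittay.get_activities()[1] == ["watch television", "eat"]
```

Note that the client never invokes `eat()` directly; it only transfers state (the goal) and inspects state (the activities), which is the resource-oriented shape the post is arguing for.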
> now, it is hard for me to see 'start' and 'stop' as values of
> the 'currentactivity' property. if anything, it looks like a
> masquerade of the RPC equivalent of posting to the
> VirtualEnvironment entity with the pair 'action=start'.

What happens if a client sends 'start' twice? In a state transition
diagram, it would be fairly easy to describe the desired behavior. In an
'action oriented' approach, you'd have to special-case that to prevent
multiple initializations, etc.
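The point about repeating 'start' can be illustrated with a sketch (the `Service` class here is hypothetical, not openQRM code): setting the desired state is naturally idempotent, while an action-style `start()` has to handle repeats explicitly.

```python
# State-oriented vs. action-oriented handling of a repeated 'start'.

class Service:
    def __init__(self):
        self.status = "stopped"

    # State-oriented: like PUT /service/abc123/status — repeating the
    # request converges on the same state, no special-casing needed.
    def put_status(self, desired):
        self.status = desired
        return self.status

    # Action-oriented: like POST action=start — the second call is an
    # error unless the implementation explicitly guards against it.
    def start(self):
        if self.status == "started":
            raise RuntimeError("already started")
        self.status = "started"


svc = Service()
svc.put_status("started")
svc.put_status("started")          # repeat is harmless, same final state
assert svc.status == "started"

svc2 = Service()
svc2.start()
try:
    svc2.start()                   # repeat must be handled explicitly
except RuntimeError:
    pass
assert svc2.status == "started"
```

This is the same property that makes PUT safe to retry after a dropped connection, whereas a 'do it' POST generally is not.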
Hey all:

Someone sent me a link to Conduit's website [1] related to a project I'm
working on. After looking at their website I learned they allow you to
configure a company- or community-specific toolbar at their website that
will install into IE or Firefox, and then distribute it to
customers/community members. They get their revenue by serving Google
searches from their toolbar, where they get a cut of the ad revenue, but
it's almost completely seamless to the user.

To test it I used one of their default templates for a Yahoo Group and
then customized it a bit. I chose to create one for REST-discuss, and
though I'm not normally a toolbar fan, after I was finished I was
pleasantly surprised at its usability! It allows you to:

-- Search REST-discuss using Google (I could configure it to also search
other engines if anyone cares)
-- Reference an RSS feed of recent REST-discuss messages from the toolbar
-- Have a chat among any REST-discuss toolbar users (currently there are
none, but if a lot of people used it...)
-- Allow us to send messages to REST-discuss toolbar users
-- There's a drop-down menu with links to all major sections on the
REST-discuss page
-- I added a drop-down menu of links to the best articles and most
important RFCs and W3C specs (although I might have missed some.)
-- And a ticker I think you'll find amusing. :) (You can turn it off if
you don't like it.)

Again, this thing is surprisingly devoid of anything that would smack of
in-your-face advertising. I think they've got the web 2.0 ethos down.

Try it out [2], and let me know what you think. I'm definitely going to
keep it installed on my browsers.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/

P.S. For the cynics in the audience, I don't benefit from people using
this in any way. I didn't include any links to any of my sites and I
won't get any affiliate revenue if you use this. I just thought it was
cool and useful for the REST-discuss community.
And the company offering this, Conduit, appears to be legit, with a lot
of traffic on their forum, where a lot of people seem to be positive
about them, and their answers about their revenue model seem to jibe
with reality.

[1] http://www.conduit.com/
[2] http://restdiscuss.communitytoolbars.com/
S. Mike Dierken wrote:
>> now, it is hard for me to see 'start' and 'stop' as values of
>> the 'currentactivity' property. if anything, it looks like a
>> masquerade of the RPC equivalent of posting to the
>> VirtualEnvironment entity with the pair 'action=start'.
>
> What happens if a client sends 'start' twice? In a state transition
> diagram, that would be fairly easy to describe the desired behavior.

can you please elaborate?

> In an 'action oriented' approach, you'd have to special case that to
> prevent multiple initializations/etc.

--
===================================
Ittay Dror,
Chief architect,
R&D, Qlusters Inc.
ittayd@...
+972-3-6081994 Fax: +972-3-6081841

www.openqrm.org - Data Center Provisioning
S. Mike Dierken wrote:
> How about a '/' character?
> GET http://example.org/openQRM/service/abc123/status
> You could even use query terms
> GET http://example.org/openQRM/service/abc123?property=status
>
>> and another question, is it ok to model the 'start' and 'stop'
>> actions as 'action' resources?
>
> Sure - if you want to dynamically define and re-define the
> implementation of the 'start' action, like uploading some script. (I
> built a system like that once; it was very cool.)
> But I think this model is headed towards a queue, where you add work
> (pointers to entities) to be done.

yes. naturally, i would do it by sending
'<start>http://example.org/openQRM/service/abc123</start>' data to the
http://example.org/openQRM/actions resource (a queue of actions), or to
http://example.org/openQRM/service/abc123/actions. but this then seems
very much like RPC.

>> -----Original Message-----
>> From: rest-discuss@yahoogroups.com
>> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Ittay Dror
>> Sent: Tuesday, January 09, 2007 10:26 PM
>> To: rest-discuss@yahoogroups.com
>> Subject: Re: [rest-discuss] Re: how to model actions
>>
>> [...]
>>
>> John D.
Heintz wrote:
>>> [...]
>>>> > www.openqrm. org <http://www.openqrm.
org >>> <http://www.openqrm.org>> - Data Center Provisioning > > > > >>> Yahoo! Groups Links > > > (Yahoo! ID required) > > >> mailto:rest- >>> discuss- fullfeatured@ yahoogroups. com > <mailto:rest-discuss- >>> fullfeatured@ yahoogroups. com >>> <mailto:rest-discuss-fullfeatured%40yahoogroups.com>> >>> > >>> > >>> > >>> > >>> > >>> > >>> > -- >>> > John D. Heintz >>> > Principal Consultant >>> > New Aspects of Software >>> > Austin, TX >>> > (512) 633-1198 >>> > >>> >>> -- >>> ============ ========= ========= ===== Ittay Dror, Chief architect, >>> R&D, Qlusters Inc. >>> ittayd@qlusters. com <mailto:ittayd%40qlusters.com> >>> +972-3-6081994 Fax: +972-3-6081841 >>> >>> www.openqrm. org - Data Center Provisioning >>> >>> >> >> -- >> =================================== >> Ittay Dror, >> Chief architect, >> R&D, Qlusters Inc. >> ittayd@... >> +972-3-6081994 Fax: +972-3-6081841 >> >> www.openqrm.org - Data Center Provisioning >> >> >> >> Yahoo! Groups Links >> >> >> > > -- =================================== Ittay Dror, Chief architect, R&D, Qlusters Inc. ittayd@... +972-3-6081994 Fax: +972-3-6081841 www.openqrm.org - Data Center Provisioning
How about a '/' character?

GET http://example.org/openQRM/service/abc123/status

You could even use query terms:

GET http://example.org/openQRM/service/abc123?property=status

> and another question, is it ok to model the 'start' and 'stop'
> actions as 'action' resources?

Sure - if you want to dynamically define and re-define the
implementation of the 'start' action, like upload some script. (I
built a system like that once, it was very cool.) But I think this
model is headed towards a queue, where you add work (pointers to
entities) to be done.

> -----Original Message-----
> From: rest-discuss@yahoogroups.com
> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Ittay Dror
> Sent: Tuesday, January 09, 2007 10:26 PM
> To: rest-discuss@yahoogroups.com
> Subject: Re: [rest-discuss] Re: how to model actions
>
> bad choice of characters. '#' is for fragment identifiers.
> maybe something like:
> http://example.org/openQRM/service/abc123:status
>
> and another question, is it ok to model the 'start' and
> 'stop' actions as 'action' resources?
>
> so, POST to http://example.org/openQRM/actions/start with data
> http://example.org/openQRM/service/abc123
>
> thanks,
> ittay
>
> Ittay Dror wrote:
> > thanks for your help.
> >
> > one final question, is it ok to model the API for 'status' as
> > http://example.org/openQRM/service/abc123#status (thus stating more
> > clearly it is a property of the service, and not a resource in itself)
> >
> > thanks,
> > ittay
> >
> > John D. Heintz wrote:
> > > Hello Ittay,
> > >
> > > Here's how I would approach the problem by way of small example.
> > > I've used "====..." just to help with the formatting.
> > >
> > > The first thing I would do is break down the name/URI space like this:
> > > ===============
> > > GET http://example.org/openQRM/service/abc123
> > > ===============
> > > HTTP/1.1 200 OK
> > >
> > > <?xml version="1.0" encoding="utf-8"?>
> > > ...
> > > <service name="..." status="started" ...
> > > ===============
> > >
> > > This indicates that we can get a representation of the service
> > > identified by "abc123".
> > >
> > > We could then PUT or POST a modified representation with
> > > status="stopped", but I'll take a different approach. Let's have the
> > > representation provide more information via the hypertext:
> > >
> > > ===============
> > > GET http://example.org/openQRM/service/abc123
> > > ===============
> > > HTTP/1.1 200 OK
> > >
> > > <?xml version="1.0" encoding="utf-8"?>
> > > ...
> > > <service name="..." ...
> > > <status current="started" href="abc123/status" ...
> > > ===============
> > >
> > > Now we have minted a new URI that is specially designed for the
> > > status of the "abc123" service.
> > >
> > > From that URI:
> > > ===============
> > > GET http://example.org/openQRM/service/abc123/status
> > > ===============
> > > HTTP/1.1 200 OK
> > >
> > > started
> > > ===============
> > >
> > > Let's say that we want to support a simple PUT on this resource to
> > > enable changing the state of a service:
> > >
> > > ===============
> > > PUT http://example.org/openQRM/service/abc123/status
> > >
> > > stopped
> > > ===============
> > > HTTP/1.1 200 OK
> > >
> > > stopped
> > > ===============
> > >
> > > If ours was the only client controlling that service this would be
> > > perfectly clear, but other clients (or the service itself) might
> > > have already changed the status. Let's add some details that
> > > provide us with information to deal with that.
> > >
> > > ===============
> > > GET http://example.org/openQRM/service/abc123/status
> > > ===============
> > > HTTP/1.1 200 OK
> > > Last-Modified: Tue, 09 Jan 2007 22:34:37 GMT
> > > ETag: "xyz-789"
> > >
> > > started
> > > ===============
> > >
> > > The Last-Modified should be clear; the ETag is a server-side hashed
> > > id that can encode details like identity and version number. In our
> > > subsequent PUT we can indicate to only do the work if the resource
> > > still matches what we last saw (optimistic locking).
> > >
> > > ===============
> > > PUT http://example.org/openQRM/service/abc123/status
> > > If-Unmodified-Since: Tue, 09 Jan 2007 22:34:37 GMT
> > >
> > > stopped
> > > ===============
> > > HTTP/1.1 200 OK
> > >
> > > stopped
> > > ===============
> > >
> > > Now it's safe for us to assume that "we" are the agent that did
> > > stop the service. A failure response would have been returned if
> > > not.
> > >
> > > Stuff that I've ignored so far:
> > > * content type: I'm just typing anything for example. Should always
> > > prefer standard and interoperable content types.
> > > * content negotiation: browsers and B2B agents may not want the
> > > same representation content types
> > > * Forms: How does this agent know it can PUT to that resource?
> > > What values are allowed to be sent?
> > > * probably a bunch more, I hope others jump in to correct me ;)
> > >
> > > Hope this helps provide a more clear example of the differences.
> > >
> > > John
> > >
> > > On 1/9/07, *Ittay Dror* <ittayd@qlusters.com> wrote:
> > >
> > > what troubles me is that most examples i find to explain rest use
> > > simple cases, where no logic is actually running, and so it is easy
> > > to transfer services to properties (since the services do very little)
> > >
> > > so here is my real world use case: openQRM is a platform for
> > > provisioning and managing services in a DC. a service is an entity
> > > named VirtualEnvironment that encapsulates both a set of policies on
> > > how to provision and manage the real service and a capture of the
> > > real service's state (resources, utilization, error, etc.)
> > >
> > > we can do many actions on a service, but the primary ones are to
> > > start and stop it. starting means resources are selected from a pool
> > > and assigned to work for the service, the software is deployed on
> > > them and the service is monitored. stopping is the reverse.
> > > obviously openQRM does all this. the client just requests to 'start'
> > > or 'stop'.
> > >
> > > now, it is hard for me to see 'start' and 'stop' as values of the
> > > 'currentactivity' property. if anything, it looks like a masquerade
> > > of the RPC equivalent of posting to the VirtualEnvironment entity
> > > with the pair 'action=start'.
> > >
> > > thank you for your help,
> > > ittay
> > >
> > > wahbedahbe wrote:
> > > > Hmmm. I'm a bit of a newbie too but let me take a stab at this as
> > > > an exercise.
> > > >
> > > > Now let me first say that the way you posed the question is very
> > > > "objecty" or "servicey" to begin with and makes it a bit hard to
> > > > answer. You've basically said "I have a service with methods A,
> > > > B, and C; how do I make it resource based?" The problem is that
> > > > REST has no concept of class-specific methods, so without knowing
> > > > what A, B, and C do, it's hard to answer your question. In short,
> > > > there is no straightforward transformation of an abstract service
> > > > to abstract resources. (At least not that I know of...) They are
> > > > two very different ways of modeling the interface. I can't
> > > > transform one type of solution to another without understanding
> > > > the problem.
> > > >
> > > > But I can infer a bit about the problem from the fact you are
> > > > using the concept of a person and things a person can do (eat,
> > > > sleep, work). I'll assume here that sleep() and work() change the
> > > > state of the person (to sleeping and working respectively). Maybe
> > > > there is a substate when working that says what work you are
> > > > doing. Let's also assume that eat() adds some food items to a
> > > > list of things eaten.
> > > >
> > > > So to work (on building a bike shed):
> > > > PUT /person/ittay/currentactivity
> > > > Content-Type: application/x-activity+xml
> > > >
> > > > <activity>
> > > > <type>work</type>
> > > > <sub-type>building a bike shed</sub-type>
> > > > </activity>
> > > >
> > > > To sleep:
> > > > PUT /person/ittay/currentactivity
> > > > Content-Type: application/x-activity+xml
> > > >
> > > > <activity>
> > > > <type>sleep</type>
> > > > </activity>
> > > >
> > > > To eat an apple:
> > > > POST /person/ittay/stomach
> > > > Content-type: application/x-food+xml
> > > >
> > > > <food>apple</food>
> > > >
> > > > Now these actions could also kick off related business processes.
> > > > I think the only method that can't is GET. Note that you could
> > > > use GET to retrieve the current activity from
> > > > /person/ittay/currentactivity or the list of things eaten from
> > > > /person/ittay/stomach.
> > > >
> > > > --- In rest-discuss@yahoogroups.com, Ittay Dror <ittayd@...> wrote:
> > > > >
> > > > > thanks for your example and site reference.
> > > > >
> > > > > i did know that i use only HTTP methods, what i lacked is how
> > > > > to model my business methods. i am a newbie at this (maybe i
> > > > > should have indicated that)
> > > > >
> > > > > may i impose with a followup?
> > > > >
> > > > > say i have a person entity in my application. this entity has
> > > > > several business methods (eat, sleep, work). how should i model
> > > > > a rest api for these? create a PersonManager resource (which
> > > > > then will need to accept as argument what action to do), or
> > > > > create resources per action (/person/ittay/eat maybe?).
> > > > >
> > > > > thanks,
> > > > > ittay
> > > > >
> > > > > Jan Algermissen wrote:
> > > > > > Hi Ittay,
> > > > > >
> > > > > > the kind of question you ask indicates that you have not yet
> > > > > > understood a fundamental aspect of REST, namely that there is
> > > > > > a uniform interface. You do not get any other methods than
> > > > > > the uniform set (the HTTP methods in the case of using HTTP
> > > > > > as an architecture).
> > > > > >
> > > > > > Without any intention to be rude, I suggest you take a while
> > > > > > to read through some of the REST material out on the Web (the
> > > > > > REST wiki at http://rest.blueoxen.net is a good starting
> > > > > > point, as is Paul's site at http://prescod.com/rest). This
> > > > > > will help in future discussions.
> > > > > >
> > > > > >> Should I use 'POST' with a 'method=build' parameter to the
> > > > > >> 'plan' URI?
> > > > > >
> > > > > > No, this is about the worst from a REST POV as it is hidden RPC.
> > > > > >
> > > > > > Think along these lines:
> > > > > >
> > > > > > Provide a means for clients to pick a resource that behaves
> > > > > > the way they are interested in (some hypermedia the clients
> > > > > > come across could declare a resource as a BuildingManager)
> > > > > > and then use POST to submit the building plan.
> > > > > >
> > > > > > POST /BuildingManager
> > > > > > Content-Type: application/blueprint <-- this type is
> > > > > > fictitious; it would need to be standardized within the realm
> > > > > > of your system
> > > > > >
> > > > > > <blueprint>
> > > > > > <wall location=".. .."/>
> > > > > > </blueprint>
> > > > > >
> > > > > > IOW, in a RESTful system clients late-bind to resources based
> > > > > > on runtime-declared abstract behavior of those resources and
> > > > > > they communicate via a fixed set of methods.
> > > > > >
> > > > > > HTH,
> > > > > >
> > > > > > Jan
> > >
> > > --
> > > John D. Heintz
> > > Principal Consultant
> > > New Aspects of Software
> > > Austin, TX
> > > (512) 633-1198
>
> --
> ===================================
> Ittay Dror,
> Chief architect,
> R&D, Qlusters Inc.
> ittayd@qlusters.com
> +972-3-6081994 Fax: +972-3-6081841
>
> www.openqrm.org - Data Center Provisioning
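The conditional-PUT pattern John describes (and that Julian's If-Match suggestion refines) can be sketched in a few lines. This is a minimal in-memory illustration, not a real HTTP server: the `StatusResource` class and its method signatures are invented for this sketch. It derives an ETag from the representation's content, and a PUT is applied only when the client's If-Match value equals the current ETag; otherwise the server would answer 412 Precondition Failed.

```python
# Optimistic locking with ETag / If-Match, simulated in memory.
# StatusResource is a hypothetical stand-in for a server-side resource.
import hashlib

class StatusResource:
    def __init__(self, value: str):
        self.value = value

    @property
    def etag(self) -> str:
        # A content hash is one common way to derive an ETag.
        return '"%s"' % hashlib.sha1(self.value.encode()).hexdigest()[:8]

    def get(self):
        # (status code, headers, body)
        return 200, {"ETag": self.etag}, self.value

    def put(self, new_value: str, if_match: str):
        # Only do the work if the resource still matches what the
        # client last saw -- otherwise 412 Precondition Failed.
        if if_match != self.etag:
            return 412, {}, self.value
        self.value = new_value
        return 200, {"ETag": self.etag}, self.value

res = StatusResource("started")
_, headers, _ = res.get()
etag = headers["ETag"]

# Our conditional PUT succeeds because nothing changed in between.
code, _, body = res.put("stopped", if_match=etag)
print(code, body)   # 200 stopped

# A second client reusing the now-stale ETag is rejected.
code, _, body = res.put("started", if_match=etag)
print(code, body)   # 412 stopped
```

The same shape works with If-Unmodified-Since and Last-Modified; ETags are just the finer-grained validator.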
On 1/10/07, Bill Venners <bv-svp@...> wrote:
> What do you mean by "in theory structured path segments allow more
> versatile delegation of authority?"

URIs are hierarchical. The "/" character is a delimiter. For example,
examine which URLs automatically receive HTTP Basic auth credentials
after an initial 401 response from http://example.com/foo/bar/baz vs.
http://example.com/foo/bar;baz

> Also, I personally am not sure whether semicolon separated params
> would be prettier or uglier than traditional query params.

Pretty paths are mapped to ugly query parameters, so there is no need
to dwell on the relative merits. To use the first example from
http://routes.groovie.org/manual.html#route-path

http://example.com/myapp/feeds/electronics/atom.xml

maps to

http://example.com/myapp?controller=feeds&category=electronics&action=atom&type=xml

Of course, that is just one concrete example. It turns out that the
commonly used parts of URI syntax are flexible enough to accommodate a
rule-based mini-language for routing. The rest of it hasn't been
necessary yet, aside from implementation-specific uses.

So, who cares if Tomcat breaks the semicolon? You can always use
something less broken.

--
Robert Sayre

"I would have written a shorter letter, but I did not have the time."
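The routing Robert points at can be sketched as a tiny path-template matcher that decomposes a pretty path into the query parameters a framework would dispatch on. The `{name}` template syntax and the `match` helper below are made up for illustration; real routers such as Routes have richer rules.

```python
# Toy rule-based router: a pretty path is matched against a template
# and decomposed into dispatch parameters (controller, action, ...).
import re

def compile_route(template: str):
    """Turn '/myapp/feeds/{category}/{action}.{type}' into a regex
    with one named group per {placeholder}."""
    escaped = template.replace(".", r"\.")
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/.]+)", escaped)
    return re.compile("^" + pattern + "$")

def match(template: str, path: str, defaults=None):
    """Return the routing parameters for path, or None if no match."""
    m = compile_route(template).match(path)
    if m is None:
        return None
    params = dict(defaults or {})
    params.update(m.groupdict())
    return params

params = match("/myapp/feeds/{category}/{action}.{type}",
               "/myapp/feeds/electronics/atom.xml",
               defaults={"controller": "feeds"})
print(params)
# {'controller': 'feeds', 'category': 'electronics', 'action': 'atom', 'type': 'xml'}
```

This is the whole trick: the URI stays hierarchical and pretty on the wire, while the application sees the same key/value pairs an ugly query string would have carried.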
Jon Hanna wrote:
> Mike Schinkel wrote:
> > Well, I want to explore why it is harder for search engines to find
> > that page. And can we change that? Or consider the problem from a
> > different perspective?
>
> This is simple. The search engine finds x pages which it deems
> equally good as a response to a given query. Which does it link to
> first? There's no way of saying.

You didn't answer "can we change that?"

> > Google tweaks their algorithms quarterly. Why could they not
> > include an update given new guidance? The rest would follow.
>
> We can already give this guidance to Google. Google reacts
> appropriately when it receives a 301 Moved Permanently by treating
> the target of the Location header as the URI to use.

That is much like we are in a city planning meeting where I say
"People need a bus to Harold's Cross", to which you say "Tell them to
take a train", to which I say "There isn't a train that goes to
Harold's Cross", to which you reply "Tell them to take a train"...

The point is that giving Google a 301 requires the page to be
redirected, when I explicitly stated that redirection is not
appropriate for the use case, and in that use case redirection would
confuse users.

One option might be to return a 203 Non-Authoritative Information with
Content-Location pointing to a (more) canonical URL, but I'm certainly
not the expert on the uses of return codes and HTTP headers, so I
don't know if this would be misusing that information or not. Can
anyone tell me if this would be a viable way to indicate a canonical
URL without redirecting the client to the canonical URL?

> > It is also a caching minus, because the cache won't know it already
> > has something that came via a different URI. That can be a
> > usability minus in the form of slower perceived response time, and
> > a business minus in terms of higher bandwidth costs.
> >
> > Again, I'd like to explore why that is and see if there are not new
> > ways to look at the problem.
> The same reason you don't know whether what's behind door number 1 is
> the same as what's behind door number 2.

That may be why, but it doesn't explore new ways of looking at the
problem.

> Again, put a note behind door number 2 saying "go to door number 1
> and open that instead" and your problem is solved. As a bonus we can
> also control how often we check door number 2 to see if that note is
> still there.

Again, you are giving me licorice ice cream after I told you I don't
like licorice, simply because all you have is licorice. It doesn't get
me ice cream that I can stand to eat.

> Mike Schinkel wrote:
> > Also, I just ran across this:
> > http://www.w3.org/QA/2004/08/readable-uri
>
> Being treated as opaque does not mean it can't be readable or respond
> well to guesswork.

I didn't say it did.

> Mike Schinkel wrote:
> > For one example, look at Wikipedia's URLs; is it any wonder they
> > are linked as often as they are?
> >
> > http://en.wikipedia.org/wiki/REST
> >
> > If it were instead
> >
> > http://www.wikipedia.org/topic.php?topicID=7937521&lang=en-US&source-ie7&en-US&ie=utf8&oe=utf8
> >
> > I can guarantee you it would not be linked as much.
>
> If one 301'd to the other then the benefits you suggest are there
> would still hold.

Not at all; you are thinking of different use cases than I am.
Consider someone writing about a topic on their blog using the terms
"REST", "HTTP", "HTML", "URL", "URI", "AJAX", "Javascript", etc. They
want to hyperlink those terms to definitions. Given Wikipedia's clean
and clear URL structure, they know that they can link to
"http://en.wikipedia.org/wiki/REST",
"http://en.wikipedia.org/wiki/HTTP", etc., and so they do it.

But let's assume that Wikipedia only offered the "ugly" URL structure
I described above. The author would often think to themselves "Damn,
it's just too much trouble to link to all those terms. So I'm just not
going to. Let the users Google them themselves if they want to know
what they mean."
And in that case their post has less value for readers and adds less
to the web overall.

Of course you might say "They can just hyperlink to Google with
'http://www.google.com/search?q=REST', etc.", but then if Google's URL
structure were not easy to grok, that wouldn't be possible without a
lot more work either, so if you suggested that you'd just be helping
me make my point.

> > And when people link a site, the benefits accrue to the site.
>
> There are more benefits in people linking to one page on your site
> than 15 equivalent pages.

Currently yes, but I'm not talking about the current state of the web;
I'm talking about an exploration for improving it. Many people told
TimBL that the concept of the web itself made no sense, but that
didn't stop him. I'm not suggesting my ideas are on par with Tim's,
but I am suggesting that impeding the exploration of improvements will
increase the likelihood that no improvements will be seen.

As an aside, it never ceases to amaze me how many people are willing
to proactively debate away attempts to improve the web and make it
more usable...

> > So you see, there are so many small reasons why well-designed URLs
> > are a benefit. But if you don't drill down to see those ways, it
> > doesn't seem like much at 50,000 feet.
>
> The benefits of well-designed URIs are a different matter to the
> benefits of canonical URIs.

I agree, but I'd like to explore whether it is possible that they are
not at odds either.

> > 5.) Further, the URI opacity concern is related to *machines*, not
> > humans, because humans have an error-correcting mechanism called
> > intelligence. If they go to the "/cart/" URL and it displays
> > information about wheelbarrows instead of a shopping cart, the
> > human can figure it out and continue looking. The machine is not
> > yet capable of that unless programmed in advance for that problem.
> No, the opacity concern is related to humans, because if they phone
> and complain that they got information about wheelbarrows when they
> should have gotten a shopping cart, I'm going to want to hit someone
> (not really; I get far worse calls than that, but they do make me
> want to hit someone). This is because humans have an error-generating
> mechanism called stupidity.

Do you really think your fellow humans are less intelligent than you?
I think you are just voicing the frustration of someone who has been
on the support line for a poorly designed system.

On the contrary, one of the reasons humans phone and complain that
they got information about wheelbarrows when they should have gotten a
shopping cart is because the designer of the shopping cart decided
that making the URLs understandable by humans was either not worth his
effort, or believed that users couldn't possibly understand URLs,
which is a very arrogant position to take. People didn't understand
automobile controls, postal codes, telephone numbers, automated teller
machines, and more when they were first introduced, but now the vast
majority of people get on fine with them. The same will be true of
URLs as time passes.

> Responding reasonably to guesswork is great. I strive to design URI
> schemes as intuitively as I can. But I will delete any bug report
> about guessed URIs not working ...

And I would agree that that is an appropriate position. OTOH, if I
learned that people frequently entered "http://www.mysite.com/buynow/"
and got a broken link, I can tell you I'd go to the effort to ensure
they got what they were seeking, and all pedantics aside, I bet you
would too. ;-)

> ... unless it's very definitely marked as a suggested enhancement.

However, I didn't understand what you meant by this.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
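The 203-plus-Content-Location idea Mike floats can be made concrete with a small sketch. The `canonical_uri` helper below is hypothetical (it is not part of any real crawler or library): it shows how an indexer or cache could file a response under a canonical URI advertised in a Content-Location header, without any user-visible redirect, and contrasts that with the traditional 301. Whether search engines would actually honor this is exactly the open question in the thread.

```python
# Choosing the URI a cache or indexer should file a response under:
# a 301's Location replaces the request URI outright, while a
# Content-Location header hints at a canonical form even though the
# body was served at the requested (ugly) URI -- no redirect needed.

def canonical_uri(request_uri: str, status: int, headers: dict) -> str:
    """Pick the canonical URI for a response (illustrative logic only)."""
    if status in (301, 308) and "Location" in headers:
        return headers["Location"]          # permanent redirect wins
    if "Content-Location" in headers:
        return headers["Content-Location"]  # non-redirecting hint
    return request_uri                      # no better information

ugly = "http://www.wikipedia.org/topic.php?topicID=7937521&lang=en-US"
pretty = "http://en.wikipedia.org/wiki/REST"

# The hypothetical non-redirecting canonical hint (e.g. with a 203):
print(canonical_uri(ugly, 203, {"Content-Location": pretty}))
# The traditional redirect, for comparison:
print(canonical_uri(ugly, 301, {"Location": pretty}))
```

Both calls yield the pretty URI; the difference is purely in what the user's browser experiences along the way.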
Walden Mathews wrote:
> : For one example, look at Wikipedia's URLs; is it any wonder they
> : are linked as often as they are?
> :
> : http://en.wikipedia.org/wiki/REST
> :
> : If it were instead
> :
> : http://www.wikipedia.org/topic.php?topicID=7937521&lang=en-US&source-ie7&en-US&ie=utf8&oe=utf8
> :
> : I can guarantee you it would not be linked as much. I could
> : remember and compose the former; there is no way I could remember
> : the latter. (FYI, I frequently link to Wikipedia just by prefixing
> : a term with "www.wikipedia.org/wiki/")
>
> 1. I can tell you are in sales.

Why the ad-hominem response? Anyway, we are all in sales. Parents sell
their children on eating vegetables; scientists sell sponsors on
funding their research and colleagues on supporting their findings;
standards participants sell each other on agreeing to their positions
on a proposed standard; job interviewees sell themselves; etc. Anyone
who doesn't believe they are in sales is just fooling themselves, and
is most likely less effective than they could otherwise be.

That said, have I ever really been in sales? Well, I've never held a
job with the word "sales" in the title. I've never received a paycheck
that included any sales commissions. But have I ever sold something
that a company I was running needed me to sell? Absolutely.

> 2. You make assumptions I don't: have you heard of cut and paste?

Actually, your assumptions are broader than mine; you are assuming
there is only one use case when there are many. See my reply to Jon
Hanna [1] with specifics of the use case I was describing.

> : If I'm at a party and someone asked me about some pictures I took,
> : I can just tell them to go to:
> :
> : http://www.flickr.com/photos/mikeschinkel
>
> I guess if they can remember that, several hours and several beers
> later, then they can probably also remember your name. That's better
> than I can do.

Not at all.
All they need to do is be able to read my writing on the back of my
business card. But that's only possible because Flickr has a URL that
I can remember; if it didn't, I wouldn't have been able to remember it
in order to write it on the back of my business card. BTW, this is not
hypothetical; I have given out this URL when I've told friends that I
had pictures they were interested in.

> Typically, I email or IM links to people. As far as typing into the
> location bar of the browser, I avoid it like the plague.

Unfortunately, most of the web-using world's population isn't tethered
100% of the time to a machine capable of sending email.

> : I share an appreciation for motorcycles with my dad. I like to
> : send him links in email. Let's assume the link breaks. Which one
> : is he likely to be able to fix? (and which one is least likely to
> : break?)
>
> Breaks? You mean line breaks? Yes, I agree with *short* URIs in
> principle.

Yes, sorry for my typo.

> : But if they were this instead, it would be much easier for dad to
> : identify and fix those broken links (and much less likely they'd
> : break):
>
> I feel bad for your dad ...

Don't be; he is happy that his son shares one of his interests and
that his son communicates with him so often. Many fathers could only
wish for the same.

> ... in this case, and if I were he, I'd stick to fixing motorcycles,
> because it would entail much less futile typing.

That's a rather arrogant response, and it is not appreciated.

> Honestly, Mike, show your dad how to cut and paste broken (line
> wrapped) URIs back together.

I have shown him. But I can't show the untold millions of other
people's fathers, or mothers, how to do the same.

> : Now, let's say that I wanted to send him a link to look at the
> : 50th Anniversary Sportster. See the link below? Tell me what I
> : should send him. Go ahead. Open it up. And tell me what link.
>
> I'm afraid....

As you should have been.
:-) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..." [1] http://tech.groups.yahoo.com/group/rest-discuss/message/7549
Nic James Ferrier wrote: > > I made an assumption in my comments which you > > ferreted out; that yes, I was talking about defaults > > where users of proxies and firewalls barely have > > the skills (or the money) to get them up and running, > > let alone optimize them. Many organizations, > > even large ones, use the same "Jell-O" model [1] for > > infrastructure configuration as some companies use > > for shipping software; when it stops quivering it's > > done! > > > > OTOH, Nic was speaking about the large organizations > > where one department often didn't care about the > > concerns of others. > > I think Elliotte is correct that we could fix the problem by > getting the proxy makers to change their proxies. > > However, how long would it take to fix the problem? The big > organization that I was referring to had (amongst others) a > Novell Netware proxy server. It was at least 10 years old. > > I recently made a trip to a medium-sized company that was > still using Microsoft Proxy Server 1.0. I don't even want to > think about how old that is. Damn! I knew I forgot to address that one point. :) But no need, as your comments covered it perfectly; we can fix it for new equipment, but installed equipment will take years if not over a decade to rotate out of use... > > Indeed all you may need to do when a customer tells > > you your system seems broken is say, "Oh, you're using > > proxy X? That's broken and non-spec-compliant. Use > > proxy Y instead and all will be fine." > > > > That's fine, assuming that information gets to the right > > person in the company (which is a huge assumption) and > > that the right person doesn't have other tasks they view > > as higher priority (which is another big assumption.) > > I agree Mike. It's often really difficult to find out what is > actually causing the problem. Unless you have network access > to the client site in question it's almost impossible. 
> > And Elliotte's assertion doesn't work at all if we're talking > about a web 2.0 business like digg or flickr. If those sites > used PUT and it failed for everyone behind a crappy proxy, > just how many of those failures would get reported to them? That's a great point I hadn't thought to make; if a company is selling expensive services (something like Salesforce.com), customers will fix problems for them. But users will just give up on free high-volume web 2.0 sites like digg and flickr w/o even letting them know... > > Unfortunately, only people who are drawn to the technology for > > personal interest see this as important enough to allow it > > to affect other factors. For another perspective on this, ask > > the WHATWG editor Ian Hickson if > > he thinks browser vendors will support standards that could > > potentially minimize their marketshare in any way.[2] Almost no > > company will be willing to limit its marketshare for a technical > > ideal. The way to achieve the technical ideal on as broad a basis as > > possible is to make that support painless, not by ignoring market > > realities with the claim that "it is the right thing to do > > for the greater good." > > Spec compliance happens in the end. It just takes a long time. Agreed. But you and I both agree that business has got to be done in the meantime. > Of course, it would help if broken and rubbish > implementations didn't get so widespread in the first place. > But that's to do with the ridiculous amounts of hype that > drives our industry. hehe. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
Mark Baker wrote:
> On 1/9/07, Mike Schinkel <mikeschinkel@...> wrote:
> > However, the reason I was asking was because I was trying
> > to gather evidence to present to Ian Hickson to support adding
> > URI Template support to the action method of the form element
> > in Web Forms 2.0. I sent an email to uri@... to that
> > effect[1], but got no response.
>
> Though in theory that sounds like a great idea, it would
> break existing clients which is a non-starter when defining
> HTML extensions.
Excellent point! I hadn't considered that. Duh.
> You'd need a new parameter to get
> around that problem.
So by "parameter" do you mean attribute? For example, would this be
workable where newer browsers would use "template" instead of "action?"
<form method="get"
action="http://www.welldesignedurls.org/query"
template="http://www.welldesignedurls.org/{category}/{era}/">
<p>Where would you like to go?
<select name="category">
<option value="standards">Standards</option>
<option value="proposals">Proposals</option>
<option value="ideas">Ideas</option>
</select>/<select name="era">
<option value="past">Past</option>
<option value="present">Present</option>
<option value="future">Future</option>
</select>
<button type="submit">Go!</button>
</p>
</form>
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
"It never ceases to amaze how many people will proactively debate away
attempts to make the web more usable..."
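Mike's proposed template attribute above could be handled by a template-aware browser roughly like the sketch below. To be clear about assumptions: both the attribute and the simple {name} expansion syntax come from the proposal in this thread, not from any shipped HTML specification, and the expand function is invented for illustration.

```python
import re

def expand(template, values):
    # Substitute each {name} placeholder with the value of the
    # matching form control, as a template-aware browser might.
    return re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], template)

url = expand("http://www.welldesignedurls.org/{category}/{era}/",
             {"category": "standards", "era": "past"})
# -> http://www.welldesignedurls.org/standards/past/
```

An older browser would simply ignore the unknown attribute and fall back to the action URL with ordinary query parameters, which is what makes the two-attribute approach backward compatible.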
Hi, the software I'm working on has a GET/PUT interface for its configuration. GETting answers with an XML document of the active configuration; PUTting a new one causes the server to reconfigure its mode of operation. I like the model and I hope administrators will like the curl-vi-curlMinusT cycle. However, does someone know of a browser plugin that could make this even nicer? Something that offers an edit button and an option to PUT the edited document? All my online searches have been to no avail. Thanks Matthias -- Matthias Ernst Software Architect tel +49.40.32 55 87.503 fax +49.40.32 55 87.999 matthias.ernst@...
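The lost-update protection discussed earlier in the thread (If-Match, RFC 2616 section 14.24) fits this GET/edit/PUT cycle well. Here is a minimal in-memory sketch of the rule a server applies; the Resource class and the tag format are invented for illustration, since a real HTTP server does this bookkeeping for you.

```python
class Resource:
    def __init__(self, body):
        self.body = body
        self.version = 1

    @property
    def etag(self):
        return '"v%d"' % self.version

    def put(self, body, if_match=None):
        # 412 Precondition Failed: the entity changed since the client's
        # GET, so blindly writing would clobber someone else's update.
        if if_match is not None and if_match != self.etag:
            return 412
        self.body = body
        self.version += 1          # a changed entity gets a new validator
        return 200

cfg = Resource(b"<mode>debug</mode>")
stale = cfg.etag                                   # client A GETs and edits
cfg.put(b"<mode>test</mode>", if_match=stale)      # A's PUT succeeds: 200
status = cfg.put(b"<mode>prod</mode>", if_match=stale)  # B reused a stale tag
# status is 412: B must re-GET, merge, and retry
```

In the curl cycle this is just remembering the ETag from the GET and adding a `-H 'If-Match: ...'` to the `curl -T` step.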
Mike Schinkel wrote: > I believe that the market doesn't care about that level of detail, only > academics and standard committees care to that extent. I believe the market > cares about what works for them to achieve their specific goals. I understand what you're saying, but I don't think you got me. In this case it is my contention that following the specifications leads to greater efficiency and greater profits. Companies so incompetent that they cannot manage a firewall or proxy server will be beaten by companies that can. Precisely because the market prefers competent, profitable, efficient companies, the pointy-haired will be weeded out. We're not advocating following the spec for the spec's sake. We're advocating following the spec because it works better than not following it. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Harold wrote: > Mike Schinkel wrote: > > I believe that the market doesn't care about that level of detail, > > only academics and standard committees care to that extent. > > I believe the market cares about what works for them to > > achieve their specific goals. > > I understand what you're saying, but I don't think you got > me. In this case it is my contention that following the > specifications leads to greater efficiency and greater > profits. Companies so incompetent that they cannot manage a > firewall or proxy server will be beaten by companies that > can. Precisely because the market prefers competent, > profitable, efficient companies, the pointy-haired will be weeded out. > > We're not advocating following the spec for the spec's sake. > We're advocating following the spec because it works better > than not following it. I generally agree with you in the long term, but not in the short term. The market won't reward many companies fast enough, so we have the chicken & egg problem. If they did reward them quickly, we'd have no problem with people not following standards, right? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..." P.S. I just read your article [1] and was very excited by it. I think you will like a concept I'm working on when I am finally ready to publish on it... [1] http://cafe.elharo.com/web/rest-is-like-quantum-mechanics/
Ernst, Matthias wrote: > > > Hi, > > the software I'm working on has a GET/PUT interface for its > configuration. GETting answers with an XML document of the active > configuration; PUTting a new one causes the server to reconfigure its > mode of operation. > > I like the model and I hope administrators will like the > curl-vi-curlMinusT cycle. > > However, does someone know of a browser plugin that could make this even > nicer? Something that offers an edit button and an option to PUT the > edited document? All my online searches have been to no avail. > > Thanks > Matthias Some time ago I was thinking about making a Firefox extension for that (+ some WebDAV features). Back then, I couldn't manage to retrieve the binary content using the XmlHttpRequest object, so I gave up. Best regards, Julian
Amaya has PUT support. On 1/10/07, Ernst, Matthias <matthias.ernst@...> wrote: > Hi, > > the software I'm working on has a GET/PUT interface for its > configuration. GETting answers with an XML document of the active > configuration; PUTting a new one causes the server to reconfigure its > mode of operation. > > I like the model and I hope administrators will like the > curl-vi-curlMinusT cycle. > > However, does someone know of a browser plugin that could make this even > nicer? Something that offers an edit button and an option to PUT the > edited document? All my online searches have been to no avail. > > Thanks > Matthias > > -- > Matthias Ernst > Software Architect > > tel +49.40.32 55 87.503 > fax +49.40.32 55 87.999 > matthias.ernst@... -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Mike Schinkel wrote: > I generally agree with you in the long term, but not in the short term. The > market won't reward many companies fast enough, so we have the chicken & egg > problem. If they did reward them quickly, we'd have no problem with people > not following standards, right? Markets are not perfectly efficient. There's always friction. It takes time for the bad actors to be weeded out, but the time does pass. When I started working with HTML it seemed like forever before people could use forms reliably. When I started working with CSS, it seemed like anything beyond a font tag would never be possible. When I started working with XML, I despaired of using XSLT on public facing web sites. All these are now reasonably reliable today; even if most developers still assume it's 1999 and that none of these things work. (Well, maybe not forms. :-) ) Funny how time passes faster as I get older. :-) -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Bill Venners wrote: > In observing my own behavior while using other people's web > sites, I noticed I occasionally find myself hacking off > pieces of a URI in the hopes of finding something > conceptually higher up in the hierarchy. > It usually didn't work, but I tried. I did not add things to > URIs, but I did try to subtract things. I want URI hacking > to always work at the websites whose URIs and > information architecture I design. Where I depart from Mike's > approach is that I would still pick one hierarchy from his > many possibilities, and have one canonical URI for each resource. I'm starting to come to the conclusion that the resources I think are the same are actually different. Yes, they may have mostly the same content, but they have different breadcrumbs, which means one can't be cached and served for the other. Of course someone might argue that because the content on the different pages is so similar I should not create different resources; I should have only one. But I think that would be pedantic, optimizing the caching benefit to the detriment of usability. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
> Webarch [1] is pretty clear on the topic, though. Agents > should not infer properties from the URI, and "agents" > means both people and machines. However, metaDataInURI-31 [2], which was finalized more recently than WebArch [1] and is referenced from WebArch, is also very clear on the topic: 2.2 Guessing information from a URI ...the ability to explore the Web informally and experimentally is very valuable, and Web users act on such guesses about URIs all the time. Many authorities facilitate such flexible use of the Web by assigning URIs in an orderly and predictable manner. Nonetheless, in the example above, Bob is responsible for determining whether the information returned is indeed what he needs. Is it possible that we have an ingrained view of URI Opacity that is struggling with its evolution towards a more realistic and open-minded stance? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..." [1] http://www.w3.org/TR/webarch/#uri-opacity [2] http://www.w3.org/2001/tag/doc/metaDataInURI-31#guessing
Chuck Hinson said: > And if I understand Roy[1] correctly, the only constraint is > that agents should not INFER properties from the URI. As > long as things are explicitly defined (by spec, by the server, > or otherwise), agents can and do make use of information > embedded in the URI. THANKS for the link. I've been trying to locate it for a while. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
Jon Hanna wrote: > It's a foolish user though that can't tell the difference > between a guess they tried and "the way things are > meant to be". Can you please clarify your context for "the way things are meant to be?" I don't follow. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
Walden Mathews wrote: > 1. You train your user to use the system at full leverage. Can you clarify what you mean by "full leverage?" > 2. You retain encapsulation of your implementation, > and are then free to change it without breaking your > clients. Giving users the ability to manually hack or guess URLs does not constrain you from changing in the future (although, "Cool URIs don't change"...) Users who hack or guess URLs manually do so almost exclusively right after retrieving an existing resource. And if a URL they used to go to changes, they are human; they will adjust. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
Bill Venners wrote: > I agree he should present the information architecture > through hypertext, but it need not be "instead." It can > be in addition to presenting it in the URI. Like it or not, > both the hypertext and the URI are part of the user > interface of a web application. That is a key and vital point, and definitely worth repeating. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
Mike, In wikis by http://www.jot.com/ (recently acquired by Google), each page has one and only one canonical URL. However, users can construct almost any path to get to the canonical URL, and if all the pages along the path exist, the wiki will show them the page they want. So in this case, canonical URLs work very nicely with user URL-hacking. Is that something like what you have in mind?
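The JotSpot-style scheme described above might work roughly like this sketch: any user-constructed path is accepted provided every page named along it exists, and the page served is the canonical one for the final segment. The page names, the pages set, and the base URL are all invented for illustration.

```python
pages = {"Projects", "Atlas", "Design"}

def resolve(path_segments, base="http://wiki.example.com/"):
    # every hop of the hacked path must name an existing page
    if not path_segments or any(seg not in pages for seg in path_segments):
        return None
    # the canonical URL ignores the route the user took to get there
    return base + path_segments[-1]

# two different hacked paths, one canonical URL
assert resolve(["Projects", "Atlas"]) == resolve(["Design", "Atlas"])
```

This keeps one cacheable URL per page while still rewarding the kind of URL-hacking the thread is debating.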
Mike Schinkel wrote: > Jon Hanna wrote: >> It's a foolish user though that can't tell the difference >> between a guess they tried and "the way things are >> meant to be". > > Can you please clarify your context for "the way things are meant to be?" I > don't follow. If I'm at http://www.example.net/a/b/c and go to http://www.example.net/a/b/ and receive a 403 or anything else, I don't assume the application is buggy.
Hi Mike, On Jan 10, 2007, at 6:27 AM, Mike Schinkel wrote: > Bill Venners wrote: >> In observing my own behavior while using other people's web >> sites, I noticed I occasionally find myself hacking off >> pieces of a URI in the hopes of finding something >> conceptually higher up in the hierarchy. >> It usually didn't work, but I tried. I did not add things to >> URIs, but I did try to subtract things. I want URI hacking >> that to always work at the websites whose URIs and >> information architecture I design. Where I depart from Mike's >> approach is that I would still pick one hierarchy from his >> many possibilities, and have one canonical URI for each resource. > > I'm starting to come to the conclusion that the resources I think > are the > same are actually different. Yes, they may have mostly the same > content but > they have different breadcrumbs which mean one can't be cached and > served > for the other. Of course someone might argue that because the > content on the > different pages is so similar I should not create different > resources; I > should only have one. But I think that would be being pedantic in > trying to > optimize the caching benefit in exchange to the detriment of > usability. > That's a good point. Caching is valuable too from a usability perspective and a business one, because it will make the site seem faster to users. I can think of a few options in that case. One is to let go of the breadcrumbs. Users can use the back button to back up. On Firefox if I hold down the back button I get essentially breadcrumbs of page titles, so you could consider dropping them from the page itself to enable you to have a canonical URI for each concept. But breadcrumbs can be nice too, and if you don't want to drop them, another thing you could do since you probably only have a few variations of breadcrumbs is try to have a canonical URI and send a different ETag for each variation. 
I don't know that this would work, because you'd have to figure out what breadcrumbs to show on that canonical URI based on which URI they came from. I'm not sure how consistently browsers will indicate the URI from which they were redirected in the canonical URI request. And even if that does work well enough, you don't get as good caching because proxies can't do content negotiation for you. So the proxies would have to ask the server each time if the appropriate representation is one of the ones it has cached. If so, your server wouldn't have to send the data to the proxy. But that's the only real speedup, and it isn't as good caching as having different URIs for each. Another thing you could do is only redirect to a canonical URI for selected search engine robots. Search engines warn you not to do this. You aren't supposed to do something different for a search engine robot than for non-search engine robots. But what they are mainly concerned about is a page about poker that, whenever a search engine robot shows up, gets switched to a page about the representative of your district. Canonicalizing URIs would help search engines provide better results. So in this approach you get good caching and good search results, but at the risk of getting penalized for breaking the search engine's guidelines by being removed entirely from their index. Or, you could just not have canonical URIs for each concept and accept the likelihood of reduced traffic from search engines. I wouldn't do that in my situation, but all design decisions depend on context. Maybe in your case being found via search engines isn't as important as being super cache-friendly and having breadcrumbs. You have to decide what tradeoffs to make, and it isn't easy. I have struggled a lot with balancing these things due to variation on pages coming from customization based on whether a user is logged in or not, and if so, who they are. 
One of the rules of thumb is to minimize variation as much as possible, and that's one reason I never show breadcrumbs on pages. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
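The revalidation step Bill describes can be sketched as follows: a cache holding several breadcrumb variants of one canonical URI sends all their ETags in If-None-Match, and the origin answers 304 naming whichever one is still right for this request (valid HTTP/1.1 mechanics, RFC 2616 section 14.26). The tag values and the function here are invented for illustration.

```python
def revalidate(if_none_match, current_etag):
    # If-None-Match may carry a comma-separated list of entity tags,
    # one per variant the cache has stored for this URI.
    cached = [tag.strip() for tag in if_none_match.split(",")]
    if current_etag in cached:
        # 304 Not Modified: the cache may serve its stored variant,
        # identified by the ETag header in the response.
        return 304, current_etag
    # 200: the cache holds no matching variant and must fetch a body.
    return 200, current_etag

status, etag = revalidate('"crumbs-home", "crumbs-search"', '"crumbs-home"')
# status is 304; the cache serves its "crumbs-home" copy
```

The cost Bill notes is visible here: every request still needs a round trip to the origin, so the bandwidth is saved but the latency is not.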
On Mon, 2007-01-08 at 18:36 -0800, Roy T. Fielding wrote: > On Jan 8, 2007, at 3:58 AM, Benjamin Carlyle wrote: > > I see identity as the rock of vocabulary that allows unambiguous > > conversations to be had about what resources are and how we should > > interact with them. You can then move on to definitions of resources > > that help express the meaning of the classic HTTP methods: > ... > > This set of definitions defines the resource concept in a different > > way > > to REST, and I think forms a subset of theoretical REST that can form > > the basis of discussion about which methods a particular architecture > > should have. It lies somewhere between the real web and REST theory > > as a > > practical bridge of good web style. ... > The problem with your model is that it doesn't respect reality. > By axiomatizing away the notion of resource equivalence you simplify > the model, but then your model is incapable of explaining the > information theoretic properties of the Web that I just finished > describing. In your model, URI aliases are not an issue because > they don't exist. In the REST model, URI aliases are an issue > because they reduce the perceived importance of a given resource > and reduce the efficiency of caching resource representations. Perhaps I have miscommunicated. Leaving aside the definition of resources for the moment, what I have said is that at time t, in good REST style, a url demarcates/selects state as well as defining a potentially bidirectional mapping between representations of various types and its state. That state can overlap with the state of other resources. I think that when the state overlap is perfect (and perhaps the mapping is perfect also) this is what you mean by a URI alias. However this hides a more general problem of overlapping state. We have overlap when a blog front page shows ten articles, each of which is accessible as a permanent link. 
Rather than duplicating content, it would be more scalable to employ a technology such as HInclude [1] to construct this page on the client side as required. Each individual page can benefit from caching, and the template that sews them together would be smaller. This is a related problem to that of a true alias, and I suggest one that is clearer when we talk about urls that demarcate related state than when we talk about whether urls refer to the same resource. > So, the question then isn't how you might define resources. The > question is what do you intend to accomplish by doing so? There > are many ways to look at any given system, particularly when > focusing on only one of the components. An abstraction like REST > is supposed to help the designer identify mismatches in the > architecture. Perhaps you don't see the problem because you > aren't applying your model to the Web as a whole, but rather > something more limited (such as a server-side development > framework)? The purpose of defining a resource to be a particular thing depends on what you will use that definition for. I don't think the "resource is a url is a resource" definition is only meaningful at the server side. The server has concepts of resource equivalence for its purposes, just like anyone else in the architecture. If the server serves the same representations for two urls it may think of them as the same resource. However a proxy in the middle or a web browser isn't afforded the same luxury. I think that at the machine level the resource and url concepts can't be separated. Any particular point in the architecture may place particular urls into equivalence classes, but those classes will be different throughout the architecture. At the machine level we can only talk about the 1:1 url<->resource mapping. If we talk about resources being the same at the human level we can work with some latitude. However, the view is still a context-sensitive one. 
Whether two urls refer to the same resource or not may depend on language, regional, or other considerations. Apart from scheme-specific equivalence, such as reordering of query parameters or matrix url parameters, the very fact that multiple URLs are provided for a resource is usually an indication that the identifications are not equivalent. My interest in defining a resource is from a couple of perspectives: * REST enthusiasts consistently get an inconsistent picture of what a resource is. A central concept of an architectural style should be conceptually simple and consistent, but we get into arguments about whether resources are the same when we haven't clarified the "from which perspective" question. url as identity is an impartial way of talking about this concept. * REST is not only GET. The dissertation talks about a resource as a mapping to a set of values. This is fine for GET but doesn't provide a framework under which we can assess the value of other methods. If instead of talking about a resource as "what you see" we can talk about a resource as "demarcating state and mapping to representation types, defining what you see and what it understands", we can then talk about what a PUT might do. At time t we expect that a PUT will operate on the same state that a GET would retrieve. Without including the definition of the demarcated state as part of the resource we don't really have the grounding to talk about PUT or other mutation methods. So when people start arguing about whether two resources are the same or not, I am prone to step in and say that no two resources are the same, though they may demarcate the same state. We can then talk generally about resources that have overlapping state and what we should do about them. From this grounding I can also say that PUT replaces this state, POST adds to it, and DELETE PUTs the null state... but that business logic may step in to leave things in states we don't expect, or change state we didn't anticipate. 
On Mon, 2007-01-08 at 09:45 -0500, Walden Mathews wrote: > Let's say you have a static page you intend to host forever, and > you have one and only one representation you send for that page. > You don't honor POST, PUT or DELETE (or any other unsafe > method that may appear someday). But, for reasons we don't > care about right now, you support two URL's for that page: > http://benjamin.com/thepage and http://benjamin.com/page1 > Clearly you have two identifiers here. But are you willing to > allow that they simply identify the same resource, and so there are > not two resources, just one? If not, why? Maybe I'll run out of money someday or die, and someone else will take over the domain. No machine in the architecture can infer that these are different resources to themselves any more than a machine in the architecture can infer that these are the same resource. All of the old hyperlinks will be intact... but the server may now treat them differently. Maybe they are two different files in the filesystem and one becomes corrupt but the other doesn't. How sure do you need to be that for all time they return the same values to call them the same? On Mon, 2007-01-08 at 12:36 +0000, Jon Hanna wrote: > Benjamin Carlyle wrote: > > I see this suggestion as a practical breakdown of the system. No node in > > the system can sustainably claim equivalence of two urls except as > > defined by scheme without adding "for my purposes". > Sure it can. > Just like I can say that the identifiers "Norma Jean Mortenson" and > "Marilyn Monroe" identify the same person. ... for the purposes of identifying a natural individual. Now, for the purposes of talking about the movie actress vs her private life we might have to rethink things. We can talk about Marilyn Monroe outliving Norma Jean. Are they still the same resource? The very fact that we constructed two identifiers suggests that the two will not always be equivalent, certainly not for all purposes. So same, different? 
It depends on the purpose or intent of the entity who makes the claim of equivalence. Should the identification of which resources are the same depend on subjective perspectives? I would rather it didn't, for such a central concept of the architectural style... but that is a personal perspective. Benjamin [1] http://www.mnot.net/javascript/hinclude/
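The client-side assembly Benjamin suggests with HInclude can be sketched loosely as follows: the front page is a small template naming fragment URLs, and each article fragment is fetched (and cached) once rather than duplicated into the page. The URLs, the origin dict, and the fetch function are invented for illustration.

```python
origin = {"/posts/1": "<p>first</p>", "/posts/2": "<p>second</p>"}
cache = {}

def fetch(url):
    # Each permalink is cached independently, so the front page and
    # the permalink view share one stored copy of the article.
    if url not in cache:
        cache[url] = origin[url]
    return cache[url]

def assemble(template):
    # Stitch the fragments together on the client side, as an
    # HInclude-style transclusion would.
    return "\n".join(fetch(url) for url in template)

front_page = assemble(["/posts/1", "/posts/2"])
```

Only the small template itself need be regenerated when the front page changes; the article fragments stay cached under their own URLs.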
Hi Robert, On Jan 9, 2007, at 11:42 PM, Robert Sayre wrote: > On 1/10/07, Bill Venners <bv-svp@...> wrote: >> >> What do you mean by "in theory structured path segments allow more >> versatile delegation of authority?" > > URIs are hierarchical. The "/" character is a delimiter. For example, > examine which URLs automatically receive HTTP Basic auth credentials > after an initial 401 response from > > http://example.com/foo/bar/baz > > vs. > > http://example.com/foo/bar;baz > Yes that's true. >> Also, I personally am not sure whether semicolon separated params >> would be prettier or uglier than traditional query params. > > Pretty paths are mapped to ugly query parameters, so there is no need > to dwell on the relative merits. To use the first example from > http://routes.groovie.org/manual.html#route-path > > http://example.com/myapp/feeds/electronics/atom.xml > > maps to > > http://example.com/myapp? > controller=feeds&category=electronics&action=atom&type=xml > > Of course, that is just one concrete example. It turns out that the > commonly used parts of URI syntax are flexible enough to accommodate a > rule-based mini language for routing. The rest of it hasn't been > necessary yet, aside from implementation-specific uses. So, who cares > if Tomcat breaks the semicolon? You can always use something less > broken. > Perhaps I misunderstood you. I agree with your aesthetic sense that paths are prettier than queries in URIs. But I think that both paths and queries are needed, so sometimes you will have query parts. The question I was asking is which form of embedding query params in URIs would be the prettiest and most user-friendly? http://www.artima.com/articles?o=a&t=java&p=7 is the traditional way. But: http://www.artima.com/articles;a,tjava,p7 or http://www.artima.com/articles~a,tjava,p7 could also be used in our architecture. I'm not sure that they are much prettier than the traditional query form, but the latter forms are shorter. 
Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
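Robert's point that pretty paths are simply mapped to query parameters by a rule-based router can be sketched with a single routing rule. The hard-coded regex below stands in for the Routes mini-language; the pattern and parameter names are taken from Robert's example, while the route function itself is invented for illustration.

```python
import re

# One rule: /myapp/feeds/{category}/{action}.{type}
ROUTE = re.compile(
    r"^/myapp/feeds/(?P<category>[^/]+)/(?P<action>[^./]+)\.(?P<type>\w+)$")

def route(path):
    m = ROUTE.match(path)
    if m is None:
        return None                       # no rule matched this path
    # fixed parts of the rule supply the controller; the path supplies the rest
    return {"controller": "feeds", **m.groupdict()}

params = route("/myapp/feeds/electronics/atom.xml")
# params == {'controller': 'feeds', 'category': 'electronics',
#            'action': 'atom', 'type': 'xml'}
```

The pretty URL stays on the wire and in bookmarks; the ugly query-parameter form exists only as the router's internal output.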
On 1/11/07, Jon Hanna <jon@...> wrote: > If I'm at http://www.example.net/a/b/c and go to > http://www.example.net/a/b/ and receive a 403 or anything else I don't > assume the application is buggy. Why exactly is that "the way it's supposed to be"? One of the key points of the web is that it is so explorable, and by exploring, the user *will* make judgements about the URLs he/she tries out. I always get a bit annoyed at your scenario above; it means someone has missed opportunities and broken the usability of the web. *Technically* nothing is buggy or broken, but I still regard it as broken or buggy by *design*. Alexander -- Project Wrangler, Information Alchymist, UX, RESTafarian, Topic Maps (job hunting at http://shelter.nu/blog/2006/11/need-unique-talent.html) ------------------------------------------ http://shelter.nu/blog/ -------
On 1/10/07, Mike Schinkel <mikeschinkel@...> wrote: > I'm starting to come to the conclusion that the resources I think are the > same are actually different. Yes, they may have mostly the same content but > they have different breadcrumbs which mean one can't be cached and served > for the other. Just because you navigated to a location circuitously doesn't mean you'd want to see that path reflected in your breadcrumbs. You probably want canonicalized breadcrumbs. You have a back button if you want to retrace your steps; you have breadcrumbs to navigate the site. Hugh
: I'm starting to come to the conclusion that the resources I think are the : same are actually different. Yes, they may have mostly the same content but : they have different breadcrumbs which mean one can't be cached and served : for the other. Of course someone might argue that because the content on the : different pages is so similar I should not create different resources; I : should only have one. But I think that would be being pedantic in trying to : optimize the caching benefit in exchange to the detriment of usability. Separation Of Concerns suggests that your sites (and your life) will be simpler if you decouple the thing you are seeking from the conversation you had along the way. That's not being pedantic either. Walden
: : Is it possible that we have an ingrained view of URI Opacity that is : struggling with its evolution towards a more realistic and open-minded : stance? : Anything's possible, but in my experience it's the opposite: the location bar is almost irrelevant, and becoming more and more so. When I want to share a resource with a colleague, I am more apt to tell him the string I googled on, and then refer to the nth entry in the list than I am to cite the URL. That's if we're in speaking range. Other than that, I cut and paste URIs into emails and IM messages. And it's been a long long time since I used lipstick to write a URL on a party napkin. Maybe it's just me. Walden
: I agree he should present the information architecture through : hypertext, but it need not be "instead." It can be in addition to : presenting it in the URI. Like it or not, both the hypertext and the : URI are part of the user interface of a web application. Yes, but the constraints on "good hypertext" are not the same as the constraints on "good URI". So when the information architecture changes, where are you? : : > Benefits: : > : > 1. You train your user to use the system at full leverage. : > : What do you mean by "full leverage?" Let the server tell you where it is meaningful to go. Click instead of type. : Yes, that's what I mean by canonical. A canonical form, to which : other URIs that may mean the same thing redirect. Oh, well in my analysis, "canonical URI" and "canonical URI form" are two radically different ideas. Walden
Hi Walden, On Jan 10, 2007, at 5:45 PM, Walden Mathews wrote: > : I agree he should present the information architecture through > : hypertext, but it need not be "instead." It can be in addition to > : presenting it in the URI. Like it or not, both the hypertext and the > : URI are part of the user interface of a web application. > > Yes, but the constraints on "good hypertext" are not the same > as the constraints on "good URI". So when the information > architecture > changes, where are you? You're right, it is much more cumbersome to change the URI than the representation, because people and software may have linked to, bookmarked, indexed, etc., that page at that URI. That's one reason it is so important to think hard about what your URIs should be in the first place. But you *can* change them and then redirect the old URIs to the new one. For example, to log into Artima, originally I used a JSP that came with Jive software, and the URL was: http://www.artima.com/login.jsp I stopped using JSP at one point and switched to simply indicating what came back was HTML with a .html extension: http://www.artima.com/login.html Then I decided the name of the page should match the link to it, which was "sign in" not "log in," so I changed it to: http://www.artima.com/signin.html Then I read something, maybe "Cool URIs Don't Change," that convinced me that I shouldn't put the .html on the end. I also read something about search engines that indicated that words separated by underscores would be indexed. So I changed it to: http://www.artima.com/sign_in All the old forms redirect to the latest one. No broken links. No lost Google juice. Changing URIs can be done, it's just not as easy or advisable as changing representations. > : > : > Benefits: > : > > : > 1. You train your user to use the system at full leverage. > : > > : What do you mean by "full leverage?" > > Let the server tell you where it is meaningful to go. Click instead > of type.
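Bill's redirect chain amounts to a server-side table of legacy paths that 301 to the canonical one. A sketch under stated assumptions: the handler function and status tuple are invented for illustration and are not Artima's actual code; only the URL paths come from the post.

```python
# Sketch of redirecting old URI forms to the canonical one so that
# "all the old forms redirect to the latest one" and no links break.

CANONICAL = "/sign_in"
LEGACY = {"/login.jsp", "/login.html", "/signin.html"}

def handle(path):
    """Return (status, location) for a requested path."""
    if path in LEGACY:
        return (301, CANONICAL)   # Moved Permanently: clients update links
    if path == CANONICAL:
        return (200, None)        # serve the sign-in page
    return (404, None)

print(handle("/login.jsp"))
```

Using 301 rather than 302 is what lets bookmarks and search engines follow the URI to its new home, preserving the "Google juice" Bill mentions.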
> It isn't just users typing into the address bar, but users simply looking at the URI to get an idea of where they are in the info architecture. Yes, you can be successful with ugly URIs. Amazon has certainly trained their users to not try and figure out where they are based on the URI. But I still think short, crisp, meaningful URIs make a web app more user-friendly. > : Yes, that's what I mean by canonical. A canonical form, to which > : other URIs that may mean the same thing redirect to. > > Oh, well in my analysis, "canonical URI" and "canonical URI form" > are two radically different ideas. > I'm curious how you define these two differently? Thanks. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Walden Mathews wrote: > : I'm starting to come to the conclusion that the resources I > : think are the same are actually different. Yes, they may > : have mostly the same content but they have different > : breadcrumbs which mean one can't be cached and served > : for the other. Of course someone might argue that because > : the content on the different pages is so similar I should not > : create different resources; I should only have one. But I > : think that would be being pedantic in trying to optimize > : the caching benefit in exchange to the detriment of usability. > > Separation Of Concerns suggests that your sites (and your > life) will be simpler if you decouple the thing you are > seeking from the conversation you had along the way. That's > not being pedantic either. I understand what you are saying, but I believe your point about separation of concerns does not hold for the use case we are discussing. If we were only discussing the path to get there it would be one thing, but we are instead discussing the state of the resource, which includes breadcrumbs. Just like the probability of getting heads on a coin flip does not change no matter how many times you flip the coin, when someone is at the resource with the specific breadcrumbs they may want to move up and then back down the hierarchy, and that has nothing to do with where they've been. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
Walden Mathews wrote: > : Is it possible that we have an ingrained view of URI Opacity that is > : struggling with its evolution towards a more realistic and > : open-minded stance? > Anything's possible, but in my experience it's the opposite: > the location bar is almost irrelevant, and becoming more and more so. Well that is a statement without a shred of supporting evidence. My supporting evidence is a little stronger [1] [2]. I'm more than open to considering other perspectives, but counter opinions without any supporting evidence seem so troll-like it's hard to take them as anything else. > When I want to share a resource with a colleague, I am more > apt to tell him the string I googled on, and then refer to > the nth entry in the list than I am to cite the URL. Hmm. I'd rather give a colleague an exact identifier than a vague notion. After all, I thought that was the whole point of URLs. Similarly, when I have friends over who don't know where I live, I give them my exact address rather than tell them to go to "midtown atlanta and drive around looking for a condo"... I have an idea, maybe we should suggest to TimBL that he change RDF to just require a link to a Google search rather than requiring all those pesky little URIs; it would resolve so many pedantic little nuance debates. Whadaya think, he'll go for it? > if we're in speaking range. Other than that, I cut and paste > URIs into emails and IM messages. Let's hope they are not too important, because if they are long they'll break and your recipient might not be able to find them (just as I recently couldn't find [3] because email ate the link). But then I was forgetting that you are not "in sales," I guess it's unimportant for you to communicate such things reliably? > And it's been a long long time since I used lipstick to write > a URL on a party napkin. That's a rather strange writing utensil for a male, wouldn't you say?
Maybe your choice of lipstick for writing is what is retarding your use of the hand written word? > Maybe it's just me. I'm pretty sure it is. Actually, it seems your main goal is to be contrary; I dunno. Maybe that's just you. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..." P.S. I probably shouldn't reply to emails that seem designed to bait me when I'm really tired, but sometimes I do... [1] http://www.useit.com/alertbox/990321.html [2] http://www.w3.org/2001/tag/doc/metaDataInURI-31#guessing [3] http://www.pacificspirit.com/blog/2005/03/01/wsrest_continued_do_we_need_an_ http_transfer_soap_binding_and_simplified_wsdl
Bill Venners wrote: > That's a good point. Caching is valuable too from a usability > perspective and a business one, because it will make the site > seem faster to users. I can think of a few options in that case. > > One is to let go of the breadcrumbs. Users can use the back > button to back up. On Firefox if I hold down the back button > I get essentially breadcrumbs of page titles, so you could > consider dropping them from the page itself to enable you to > have a canonical URI for each concept. > > But breadcrumbs can be nice too, and if you don't want to > drop them, another thing you could do since you probably only > have a few variations of breadcrumbs is try to have a > canonical URI and send a different ETag for each variation. I > don't know that this would work, because you'd have to figure > out what breadcrumbs to show on that canonical URI based on > which URI they came from. I'm not sure how consistently > browsers will indicate the URI from which they were > redirected in the canonical URI request. And even if that > does work well enough, you don't get as good caching because > proxies can't do content negotiation for you. So the proxies > would have to ask the server each time if the appropriate > representation is one of the ones it has cached. If so, your > server wouldn't have to send the data to the proxy. But > that's the only real speedup, and it isn't as good caching as > having different URIs for each. > > Another thing you could do is only redirect to a canonical > URI for selected search engine robots. Search engines warn > you not to do this. You aren't supposed to do something > different for a search engine robot than for non-search > engine robots. But they are mainly concerned about you having > a page about poker that when a search engine robot shows up, > you switch to a page about the representative of your > district. Canonicalizing URIs would help search engines > provide better results. 
So in this approach you get good > caching and good search results, but at the risk of getting > penalized for breaking the search engine's guidelines by > being removed entirely from their index. > > Or, you could just not have canonical URIs for each concept > and accept the likelihood of reduced traffic from search > engines. I wouldn't do that in my situation, but all design > decisions depend on context. Maybe in your case being found > via search engines isn't as important as being super > cache-friendly and having breadcrumbs. You have to decide > what tradeoffs to make, and it isn't easy. I have struggled a > lot with balancing these things due to variation on pages > coming from customization based on whether a user is logged > in or not, and if so, who they are. One of the rules of thumb > is to minimize variation as much as possible, and that's one > reason I never show breadcrumbs on pages. Fair points all. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
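Bill's idea of keeping one canonical URI but sending "a different ETag for each variation" is ordinary conditional-GET revalidation, with the caveat he notes that proxies must revalidate with the origin each time. A hedged sketch: the variation keys and the ETag scheme below are invented for illustration.

```python
# Sketch of one URI whose ETag varies with the breadcrumb variation
# shown on the page; a cache revalidates with If-None-Match and gets
# 304 only when its stored entity matches the current variation.
import hashlib

def etag_for(variation):
    """Derive a strong ETag from the breadcrumb variation (assumed scheme)."""
    return '"' + hashlib.md5(variation.encode()).hexdigest() + '"'

def respond(variation, if_none_match=None):
    """Return (status, etag): 304 means the cached copy may be reused."""
    etag = etag_for(variation)
    if if_none_match == etag:
        return (304, etag)   # cache hit: no entity body re-sent
    return (200, etag)       # send the full page for this variation

status, tag = respond("home > products > widgets")
print(status)
```

As Bill says, this only saves re-sending the entity; the proxy cannot pick among its cached variants on its own, so every request still reaches the server.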
Hugh Winkler wrote: > > I'm starting to come to the conclusion that the resources I > think are > > the same are actually different. Yes, they may have mostly the same > > content but they have different breadcrumbs which mean one can't be > > cached and served for the other. > > Just because you navigated to a location circuitously doesn't > mean you'd want to see that path reflected in your > breadcrumbs. You probably want canonicalized breadcrumbs. You > have a back button if you want to retrace your steps; you > have breadcrumbs to navigate the site. To paraphrase Dan Connolly, I think you and I are on different planets. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to make the web more usable..."
> > Sure - if you want to dynamically define and re-define the > > implementation of the 'start' action, like upload some script. (I > > built a system like that once, it was very cool.) But I think this > > model is headed towards a queue, where you add work (pointers to > > entities) to be done. > > yes. naturally, i would do it by sending a > '<start>http://example.org/openQRM/service/abc123</start>' > data to the http://example.org/openQRM/actions resource (a > queue of actions), or > http://example.org/openQRM/service/abc123/actions. but this > then seems very RPC It's not really RPC since the method is 'add to resource' or 'extend resource' and not a custom method. It does border on a message oriented approach that quickly loses the clarity of defined methods - a generic 'process this' for everything (which I don't like, but that's just me).
> S. Mike Dierken wrote: > >> now, it is hard for me to see 'start' and 'stop' as values of the > >> 'currentactivity' property. if anything, it looks like a > masquerade > >> of the RPC equivalent of posting to the VirtualEnvironment entity > >> with the pair 'action=start'. > > What happens if a client sends 'start' twice? In a state transition > > diagram, that would be fairly easy to describe the desired > behavior. > > In an 'action > > can you please elaborate? > > > oriented' approach, you'd have to special case that to prevent > > multiple initializations/etc. Sorry I wasn't clear. I pictured an implementation of 'start' that blindly allocated resources, started processes, etc. If a 'start' message were sent twice, I wouldn't be surprised if this resource allocation and process startup code tried to run a second time. Although it would be easy to guard against this, if the meaning of the API was "execute that code", it seems odd that the server would refuse to do what it advertised as its job. With state transfer, setting the state of a resource to the same value twice is easy to describe, and it is easy to accept multiple redundant messages while honoring the externally visible interface.
S. Mike Dierken wrote: > > >> S. Mike Dierken wrote: >>>> now, it is hard for me to see 'start' and 'stop' as values of the >>>> 'currentactivity' property. if anything, it looks like a >> masquerade >>>> of the RPC equivalent of posting to the VirtualEnvironment entity >>>> with the pair 'action=start'. >>> What happens if a client sends 'start' twice? In a state transition >>> diagram, that would be fairly easy to describe the desired >> behavior. >>> In an 'action >> can you please elaborate? >> >>> oriented' approach, you'd have to special case that to prevent >>> multiple initializations/etc. > > Sorry I wasn't clear. > I pictured an implementation of 'start' that blindly allocated resources, > started processes, etc. > If a 'start' message were sent twice, I wouldn't be surprised if this > resource allocation and process startup code tried to run a second time. > Although it would be easy to guard against this, if the meaning of the API > was "execute that code", it seems odd that the server would refuse to do > what it advertised as it's job. > With a state transfer, setting the state of a resource to the same value > twice is easy to describe and easy to accept multiple redundant messages and > honor the externally visible interface. excellent point. how about something slightly different: the service resource can have a 'start schedule' property. then, the client can set this property to a date, or 'now'. when the service is started, this property becomes read-only until the service is stopped (by having a stop schedule property). the status of the service is a read only property, updated by the server. > > > -- =================================== Ittay Dror, Chief architect, R&D, Qlusters Inc. ittayd@... +972-3-6081994 Fax: +972-3-6081841 www.openqrm.org - Data Center Provisioning
> > how about something slightly different: the service resource > can have a 'start schedule' property. then, the client can > set this property to a date, or 'now'. when the service is > started, this property becomes read-only until the service is > stopped (by having a stop schedule property). the status of > the service is a read only property, updated by the server. Sounds reasonable. I'm interested to hear what others have to say, though.
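Ittay's 'start schedule' property can be sketched as idempotent state transfer: setting it to the same value twice is a harmless redundant message, and it is read-only while the service runs. The class and field names below are invented for illustration; this is a sketch of the idea, not openQRM code.

```python
# Sketch of a service resource with a client-settable 'start schedule'
# property; redundant identical updates are accepted without effect.

class Service:
    def __init__(self):
        self.start_schedule = None
        self.status = "stopped"    # read-only to clients, server-updated

    def put_start_schedule(self, when):
        """Idempotent state transfer for the start-schedule property."""
        if self.start_schedule == when:
            return                 # redundant message: accept, change nothing
        if self.status == "running":
            raise PermissionError("start schedule is read-only while running")
        self.start_schedule = when
        self.status = "running"    # server acts on the new state once

svc = Service()
svc.put_start_schedule("now")
print(svc.status)
```

Contrast this with an action-oriented 'start' endpoint, where the server would have to special-case a second 'start' to avoid allocating resources twice.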
Re-reading the thread subject, and no disrespect intended. On 10/01/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > * REST is not only GET. The dissertation talks about a resource as a > mapping to a set of values. This is fine for GET but doesn't provide a > framework under which we can assess the value of other methods. If > instead of talking about a resource as "what you see" we can talk about > a resource is "demarcating state and mapping to representationt types, > defining what you see and what it understands" we can then talk about > what a PUT might do. IMHO this simply isn't 'REST for the rest of us' It's back in thesis land, abstracted to hell and gone. If this is good/valuable, please find some plain English to say it. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
Hello, On 10/01/07, Bill Venners <bv-svp@...> wrote: > On Jan 10, 2007, at 6:27 AM, Mike Schinkel wrote: > > I'm starting to come to the conclusion that the resources I think > > are the > > same are actually different. Yes, they may have mostly the same > > content but > > they have different breadcrumbs which mean one can't be cached and > > served > > for the other. Of course someone might argue that because the > > content on the > > different pages is so similar I should not create different > > resources; I > > should only have one. But I think that would be being pedantic in > > trying to > > optimize the caching benefit in exchange to the detriment of > > usability. > > > That's a good point. Caching is valuable too from a usability > perspective and a business one, because it will make the site seem > faster to users. I can think of a few options in that case. > > One is to let go of the breadcrumbs. Users can use the back button to > back up. On Firefox if I hold down the back button I get essentially > breadcrumbs of page titles, so you could consider dropping them from > the page itself to enable you to have a canonical URI for each concept. You may also keep one URI for a page, display one set of possible breadcrumbs in that page representation, and for any other path the user takes, just replace the breadcrumbs in the DOM via javascript. You keep the cache happy, you keep the bots happy, and the users can get different breadcrumbs via a separate js file referred in the page, that somehow tracks their path. The only thing that shouldn't be cacheable is that js URL (if you generate it over each request), which is anyway significantly smaller than the whole page. Cheers, -- Laurian Gridinoc, purl.org/net/laur
Alexander Johannesen wrote: > On 1/11/07, Jon Hanna <jon@...> wrote: >> If I'm at http://www.examplenet/a/b/c and go to >> http://www.examplenet/a/b/ and receive a 403 or anything else I don't >> assume the application is buggy. > > Why exactly is that "the way it's supposed to be"? Nothing beyond the documented constraints is the way it's supposed to be. This isn't, the scenario where it doesn't give a 403 isn't either. > One of the key > points to the web is that it is so explorable, and by exploring, the > user *will* make judgement upon the URL he/she tries out. I always > gets a bit annoyed at your scenario above; it means someone has missed > opportunities and broken the usability of the web. *Technically* > nothing is buggy or broken, but I still regard it as broken or buggy > by *design*. That assumes not only that implementation factors made that feasible, or didn't just make it too hard to work nicely, but that http://www.examplenet/a/b/ could be reasonably represented AND that that representation is one you would want the user to see (not always the case by a long shot). I do like it when http://www.examplenet/a/b/ works. I do try to make http://www.examplenet/a/b/ work when applicable, but it's not always going to. This is on top of the fact that such structuring of URIs meets a different set of criteria to others that have to be met to work well with the web. Using both is the ideal, but arguing that it's a matter of one versus the other is bogus.
Mike Schinkel wrote: > Hugh Winkler wrote: >> Just because you navigated to a location circuitously doesn't >> mean you'd want to see that path reflected in your >> breadcrumbs. You probably want canonicalized breadcrumbs. You >> have a back button if you want to retrace your steps; you >> have breadcrumbs to navigate the site. > > To paraphrase Dan Connolly, I think you and I are on different planets. My browser has a back button, so I'd rather use the "bread crumbs" from his planet than yours.
On Thu, 2007-01-11 at 08:18 +0000, Dave Pawson wrote: > Re-reading the thread subject, and no disrespect intended. > On 10/01/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > > * REST is not only GET. The dissertation talks about a resource as a > > mapping to a set of values. This is fine for GET but doesn't provide > a > > framework under which we can assess the value of other methods. If > > instead of talking about a resource as "what you see" we can talk > about > > a resource is "demarcating state and mapping to representationt > types, > > defining what you see and what it understands" we can then talk > about > > what a PUT might do. > IMHO this simply isn't 'REST for the rest of us' > It's back in thesis land, abstracted to hell and gone. > If this is good/valuable, please find some plain English to say it. Point taken. In plain english: A resource isn't just the representations/documents it returns when you do a GET. The resource means something. It is the service exposing its interface as information rather than methods. A GET takes a copy of the information in a particular document format for consumption by the user. A PUT replaces that information with information provided by the user. POST adds information to the resource, and DELETE PUTs the null state. These methods can enable interaction with any kind of information. REST allows other methods to be defined, but they always act on generic information. Consider the java bean concept. A client has an expectation that get_foo() and set_foo() are operating on the same thing. The thing might be a simple member variable, but might just as easily be a more abstract concept. Either way, the fact that we have a uniform way of accessing the information makes it possible to develop applications we otherwise couldn't. We can do property-based editing with a java-bean. If we used arbitrary methods we couldn't. Standard content types are also required. 
Sometimes when we modify a resource, it will only be that resource that changes. However, just as often the change will set other processing in motion. When we POST a new purchase order we expect resources that describe the set of open purchase orders to change. This information is accessible from multiple resources, and could conceivably be modified using one of several resources. We can talk about resources that share information as overlapping. Changes that might not be a simple overlap may include a commission being paid for the purchase order achieving a particular state of completion. Any kind of business logic may be performed by the server so long as it provides a reasonable interpretation of the request to modify a particular resource. Even a GET is likely to have some sort of side-effect. A log file is bound to be updated, at least. Benjamin.
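Benjamin's plain-English summary, that the uniform methods act on generic information rather than custom operations, can be shown with a toy resource. This is a sketch of the idea, not a real HTTP server; the class and method names are illustrative.

```python
# Minimal sketch of the uniform interface: GET takes a copy of the
# information, PUT replaces it, POST adds to it, DELETE PUTs the null
# state. Any client can manipulate any such resource the same way.

class Resource:
    def __init__(self, state=None):
        self._state = state

    def get(self):
        return self._state                            # copy of the information

    def put(self, state):
        self._state = state                           # replace the information

    def post(self, item):
        self._state = (self._state or []) + [item]    # add information

    def delete(self):
        self._state = None                            # "PUT the null state"

orders = Resource([])
orders.post("po-1001")
print(orders.get())
```

This mirrors the java bean analogy in the post: because access is uniform (get_foo/set_foo rather than arbitrary methods), generic tooling can work against any resource.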
Mike Schinkel wrote: > > > Hugh Winkler wrote: > > > I'm starting to come to the conclusion that the resources I > > think are > > > the same are actually different. Yes, they may have mostly the same > > > content but they have different breadcrumbs which mean one can't be > > > cached and served for the other. > > > > Just because you navigated to a location circuitously doesn't > > mean you'd want to see that path reflected in your > > breadcrumbs. You probably want canonicalized breadcrumbs. You > > have a back button if you want to retrace your steps; you > > have breadcrumbs to navigate the site. > > To paraphrase Dan Connolly, I think you and I are on different planets. Actually, that's not unreasonable from a usability/accessibility perspective. There's no need to have your site walk reflected in the breadcrumb, and there is something to be said for 'static' breadcrumbs that indicate the site's IA. This is more important today, as many people are dropped straight into a site via a search engine. cheers Bill
--- In rest-discuss@yahoogroups.com, Benjamin Carlyle <benjamincarlyle@...> wrote: > > ... When we POST a new purchase order we expect resources that > describe the set of open purchase orders to change. This information is > accessible from multiple resources, and could conceivably be modified > usring one of several resources. We can talk about resource that share > information as overlapping. Changes that might not be a simple overlap > may include a commission being paid for the puchase order achieving a > particular state of completion. Any kind of business logic may be > performed by the server so long as it provides a reasonable > interpretation of the request to modify a particular resource. Even a > GET is likely to have sort of side-effect. A log file is bound to be > updated, at least. > > Benjamin. > So here's a question: What about PUT? When you PUT something can other resources change? If so are those changes also required to be idempotent? Does that mean that a resource that is a counter for the number of times another resource was PUT is "illegal"? Also, I assume that side-effects of GET (as well as the non-idempotent side-effects of PUT and DELETE) are only allowed as long as they are not reflected in the resource space in any way? e.g. you should never make the log part of your interface, accessible via GET. Correct? (Note that I'm not implying that if for some reason you provided some HTTP access to your log file for administrators that all is lost. Arguably this is a different interface.)
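The counter question above can be made concrete: repeating a PUT leaves the resource's visible state unchanged (idempotent), while a server-internal counter of PUT requests still changes, which is fine exactly so long as that counter is not itself part of the interface. A sketch with invented names:

```python
# Sketch distinguishing idempotent resource state from non-idempotent
# server-side side effects (e.g. a request log or counter).

class Store:
    def __init__(self):
        self.resources = {}
        self.put_count = 0             # internal bookkeeping, not GETtable

    def put(self, uri, entity):
        self.put_count += 1            # side effect: changes on every request
        self.resources[uri] = entity   # visible state: same after repeats

s = Store()
s.put("/doc", "v1")
s.put("/doc", "v1")                    # repeating the PUT changes nothing visible
print(s.resources["/doc"], s.put_count)
```

If `put_count` were exposed as a resource of its own, repeated PUTs would no longer look idempotent from the outside, which is the tension the post is asking about.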
For those interested, I posted the following on the WHATWG working group blog proposing URI Templates be used for forms for WebForms 2.0: http://blog.whatwg.org/proposing-uri-templates-for-webforms-20 I posted a longer one here: http://blog.welldesignedurls.org/2007/01/11/proposing-uri-templates-for-webforms-2/ -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
On 1/10/07, Mike Schinkel <mikeschinkel@...> wrote: > So by "parameter" do you mean attribute? Doh, yes. > For example, would this be > workable where newer browsers would use "template" instead of "action?" Yup. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
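The idea behind a "template" attribute replacing "action" is URI Template expansion: the form fills named slots in a URI rather than appending query parameters. A toy sketch of simple {name} expansion; the URI Template drafts are considerably richer than this, and the example URL is invented.

```python
# Toy expansion of a URI Template: each {name} is replaced with the
# percent-encoded value supplied for that name.
import re
from urllib.parse import quote

def expand(template, values):
    """Substitute {name} placeholders with percent-encoded values."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: quote(values[m.group(1)], safe=""),
                  template)

print(expand("http://example.com/articles/{topic}/page/{n}",
             {"topic": "rest discuss", "n": "7"}))
```

This is how a template-aware browser could produce the "pretty" hierarchical URLs discussed earlier in the thread directly from form input.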
Laurian Gridinoc wrote: > You may also keep one URI for a page, display one set of > possible breadcrumbs in that page representation, and for any > other path the user takes, just replace the breadcrumbs in > the DOM via javascript. > > You keep the cache happy, you keep the bots happy, and the > users can get different breadcrumbs via a separate js file > refered in the page, that somehow tracks their path. > > The only thing that shouldn't be cacheable is that js URL (if > you generate it over each request), which is anyway > significant smaller that the whole page. Thanks. Since writing this I had thought of that as providing a better solution, but still not best. I believe that URL is part of the user interface so it doesn't address that part of the concern, unfortunately. But, I'll continue to research. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
Jon Hanna wrote: > Mike Schinkel wrote: > > Hugh Winkler wrote: > >> Just because you navigated to a location circuitously doesn't mean > >> you'd want to see that path reflected in your breadcrumbs. You > >> probably want canonicalized breadcrumbs. You have a back button if > >> you want to retrace your steps; you have breadcrumbs to > >> navigate the site. > > To paraphrase Dan Connolly, I think you and I are on > different planets. > My browser has a back button, so I'd rather use the "bread > crumbs" from his planet than yours. And I'm happy for you to do so. The difference, however, is that I'm not trying to keep you or Hugh from doing it your way. Bill de hOra wrote: > Actually, that's not unreasonable from a > usability/accessibility perspective. There's no need to have > your site walk reflected in the breadcrumb, I respectfully state that that is a matter of opinion. Like Jakob Nielsen, I believe that URL is UI[1] and if the URL doesn't reflect the site structure then they are not "hackable" in the same way and hence not as good a UI. Everyone tends to optimize for those things they value, and I think you and a few others here don't really value some things that are important to me, and vice versa. The problem comes when authoritative findings dictate one set of values at the expense of others instead of finding a way to service everyone's values. > there is something to be said for 'static' breadcrumbs > that indicate the site's IA. This is more important today, > as many people are dropped straight into a site via a > search engine. Site's IA? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to improve the web..." [1] http://www.useit.com/alertbox/990321.html
Mike Schinkel wrote: > Bill de hOra wrote: > > Actually, that's not unreasonable from a > > usability/accessibility perspective. There's no need to have > > your site walk reflected in the breadcrumb, > > I respectfully state that that is a matter of opinion. Like Jakob Nielsen, I > believe that URL is UI[1] and if the URL doesn't reflect the site structure > then they are not "hackable" in the same way and hence not as good a UI. I don't disagree with either of those statements, but I don't understand what your point is either. Countering what I think about good UI design for breadcrumbs with what you think is good UI design for URLs doesn't make sense (to me) since breadcrumbs aren't URIs and don't necessarily depend on the URI's construction. Anyway, the point I'm trying to make is this - there's no reason to dynamically generate the breadcrumb trail based on the user's walk through the site. The follow-on point is that URL design need have nothing to do with page breadcrumbs. Finally, as an aside, it's easier to arrange caching for pages with 'static' breadcrumbs. Everybody wins. > > there is something to be said for 'static' breadcrumbs > > that indicate the site's IA. This is more important today, > > as many people are dropped straight into a site via a > > search engine. > > Site's IA? Information Architecture. cheers Bill
Mike Schinkel wrote: > I respectfully state that that is a matter of opinion. Like Jakob Neilsen, I A funny person to cite. Doesn't he complain about breadcrumbs that aren't showing a way into a hierarchical structure? Still, it's his fault, since apparently he gave them that name, and the name does imply that it's showing a path that was taken. > believe that URL is UI[1] and if the URL doesn't reflect the site structure > then they are not "hackable" in the same way and hence not as good of UI. This doesn't mean they have to be broken. It's actually easier to do both at the same time, IME.
Hi Laurian, On Jan 11, 2007, at 2:18 AM, Laurian Gridinoc wrote: > You may also keep one URI for a page, display one set of possible > breadcrumbs in that page representation, and for any other path the > user takes, just replace the breadcrumbs in the DOM via javascript. > > You keep the cache happy, you keep the bots happy, and the users can > get different breadcrumbs via a separate js file referred in the page, > that somehow tracks their path. > > The only thing that shouldn't be cacheable is that js URL (if you > generate it over each request), which is anyway significantly smaller > than the whole page. > I'm not sure this would work. I tried figuring out how to give users customized versions of pages using JavaScript, as that approach was suggested to me by some folks on this list. I don't want to do it that way for several reasons, but I also couldn't see how it would be possible. The trouble is: how does a page know which alternate JavaScript URL to grab if the page is always identical? If the page isn't identical, but only differs in that one URL, then you can't cache it at one URL without resorting to multiple entities. And if you're going to do that, you may as well just put the breadcrumbs on the page on the server and have each of those versions be entities. One thing JS could do differently is look in the cookies, but am I wrong in assuming a cache intermediary would cache all the response headers too, and send them along (other than perhaps changing a few to indicate the response came from a cache)? If so, then you can't change the cookies either without resorting to multiple entities. Again, if you're going to do that, why not just have the server put the breadcrumbs in and call those multiple entities. If you force users to log in and they get a session ID cookie, then that could be accessed by the JS. (But the breadcrumbed page itself wouldn't set that cookie in its response.) But then you just have one breadcrumb trail per user session per URI.
And what if they have two browser windows open? This has the same problem that using the session to hold state always has. Am I missing some way to do this, or am I correct in concluding it isn't possible? Thanks. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
----- Original Message ----- From: "Bill Venners" <bv-svp@...> To: "Walden Mathews" <waldenm@...> Cc: "'REST Discuss'" <rest-discuss@yahoogroups.com> Sent: Wednesday, January 10, 2007 9:20 PM Subject: Re: [rest-discuss] Benefits of Canonical URLs (was Request for feedback: REST for the Rest of Us) : Hi Walden, : : On Jan 10, 2007, at 5:45 PM, Walden Mathews wrote: : : > : I agree he should present the information architecture through : > : hypertext, but it need not be "instead." It can be in addition to : > : presenting it in the URI. Like it or not, both the hypertext and the : > : URI are part of the user interface of a web application. : > : > Yes, but the constraints on "good hypertext" are not the same : > as the constraints on "good URI". So when the information : > architecture : > changes, where are you? : : You're right, it is much more cumbersome to change the URI than : the representation, because people and software may have linked to, : bookmarked, indexed, etc., that page at that URI. That's one reason : it is so important to think hard about what your URIs should be in : the first place. But you *can* change them and then redirect the old : URIs to the new ones. : : For example, to log into Artima, originally I used a JSP that came : with Jive software, and the URL was: : : http://www.artima.com/login.jsp : : I stopped using JSP at one point and switched to simply indicating : that what came back was HTML with a .html extension: : : http://www.artima.com/login.html : : Then I decided the name of the page should match the link to it, : which was "sign in" not "log in," so I changed it to: : : http://www.artima.com/signin.html : : Then I read something, maybe "Cool URIs Don't Change," that convinced : me that I shouldn't put the .html on the end. I also read something : about search engines that indicated that words separated by : underscores would be indexed.
So I changed it to: : : http://www.artima.com/sign_in : : All the old forms redirect to the latest one. No broken links. No : lost Google juice. Changing URIs can be done, it's just not as easy : or advisable as changing representations. Well, the above is a story of the kind of obsession I think we ought not fall into. It comes from valuing the wrong things, IMO. Sorry. : It isn't just users typing into the address bar, but users simply : looking at the URI to get an idea of where they are in the info : architecture. We're circling round and round. "Where we are in the info architecture" is not a primary concern. A primary concern is where you need to be next and how to get there. Good hypertext is the way. But stick to tea leaves if you like. I'm tired. : Yes, you can be successful with ugly URIs. Amazon has : certainly trained their users to not try and figure out where they : are based on the URI. But I still think short, crisp, meaningful URIs : make a web app more user-friendly. Ugly or beautiful, they are just tokens in a system that is rich with legitimate description, if you do it right. : > Oh, well in my analysis, "canonical URI" and "canonical URI form" : > are two radically different ideas. : > : I'm curious how you define these two differently? I haven't seen these terms laid out anywhere, so I'm just applying the rules of English grammar. The former simply says that one URI is preferred and all other equivalents are aliases. The latter is a matter of specifying the arrangement of meaningful fragments within the URI string so it can be understood. I think the first is good practice, while the second is marginally useful, but tends to compete with better solutions. To summarize. Walden
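[Editor's sketch] The redirect chain Bill describes (login.jsp to login.html to signin.html to sign_in) can be modelled as a simple table of 301 redirects. This is a hypothetical illustration; the function and table names are invented, and in practice each retired URL would ideally 301 straight to the final form rather than chaining hops:

```python
# Hypothetical map of retired URLs: each old form issues a 301
# pointing at its successor; following the chain reaches the
# canonical URL. (Illustrative only, not Artima's actual code.)
REDIRECTS = {
    "/login.jsp": "/login.html",
    "/login.html": "/signin.html",
    "/signin.html": "/sign_in",
}

def resolve(path):
    """Follow the redirect chain until the canonical URL is reached."""
    seen = set()
    while path in REDIRECTS:
        if path in seen:              # guard against accidental loops
            raise RuntimeError("redirect loop at %s" % path)
        seen.add(path)
        path = REDIRECTS[path]
    return path

print(resolve("/login.jsp"))          # -> /sign_in
```

No broken links, no lost Google juice: every historical URL still resolves, it just takes the client one or more 301 responses to get there.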
: I understand what you are saying, but I believe your point about separation : of concerns does not hold for the use case we are discussing. If we were : only discussing the path to get there it would be one thing, but we are : instead discussing the state of the resource which includes breadcrumbs. : Just like the probability of getting heads on a coin flip does not change no : matter how many times you flip the coin, when someone is at the resource : with the specific breadcrumbs they may want to move up and then back down the : hierarchy and that has nothing to do with where they've been. I doubt the resource includes breadcrumbs (whatever they are -- I thought I knew but now I know I don't). Perhaps some representations do. If two representations of the same resource could differ in the breadcrumb dept., and that's somehow not about navigation to that resource, then could you please give a short example so I can know what you're talking about? Walden
Did anybody mention this article in relation to this issue? http://duncan-cragg.org/blog/post/business-functions-rest-dialogues/ What do people think of it? There's also a critique at: http://www.addsimplicity.com/adding_simplicity_an_engi/2007/01/a_real_ebay_arc.html One of his main counter-arguments seems to be CompleteSale, which I thought Duncan Cragg could have handled more explicitly, as he did with ResponseToBestOffer.
Hi Mike, It would be very nice to see such a proposal accepted for HTML 5.0, in addition to adding support for PUT and DELETE actions. Even though the URI template RFC is not finalized yet, we already have complete support for it, on the server side, in the Restlet framework. We happily use them for our URI-based routing and I think they add a lot of expressiveness while keeping a simple syntax. Usage example: http://www.restlet.org/tutorial#part11 They are also supported in WADL, the RESTful description language, and in the OpenSearch specification. Extending their usage to HTML forms sounds like a logical and useful step. Regards, Jerome Louvel http://blog.noelios.com Mike Schinkel wrote: > For those interested, I posted the following on the WHATWG working group > blog proposing URI Templates be used for forms for WebForms 2.0: > > http://blog.whatwg.org/proposing-uri-templates-for-webforms-20 > > I posted a longer one here: > > http://blog.welldesignedurls.org/2007/01/11/proposing-uri-templates-for-webf > orms-2/
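[Editor's sketch] For readers unfamiliar with URI Templates: the core idea is a URI string with {name}-style placeholders that get substituted at request time. A minimal sketch of expansion, assuming only the simple {var} form (the draft spec covers considerably more, and the example URI and names here are invented):

```python
import re

def expand(template, values):
    """Expand simple {name} placeholders in a URI template.
    Minimal sketch: only bare {var} substitution, no operators,
    no percent-encoding, unlike the full draft specification."""
    def sub(match):
        return str(values[match.group(1)])
    return re.sub(r"\{(\w+)\}", sub, template)

print(expand("http://example.com/users/{user}/orders/{id}",
             {"user": "alice", "id": 42}))
# -> http://example.com/users/alice/orders/42
```

Server-side routing (as in Restlet) runs the same idea in reverse: match an incoming path against the template and extract the variable bindings.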
this article is what prompted me to submit this thread in the first place. if not, i would have probably gone the "ebay" way of defining actions in the url. i also read http://addsimplicity.typepad.com/adding_simplicity_an_engi/2006/11/the_rest_dialog.html in which the real architect continues the dialog from his point of view. what i don't like about duncan's post is that he addresses only getter and setter business functions, not real ones, those that actually create a process. it is very easy to say that instead of getFoo, you can GET http://example.com/foo. it is harder when you want to model a doSomething function. Bob Haugen wrote: > > > Did anybody mention this article in relation to this issue? > http://duncan-cragg.org/blog/post/business-functions-rest-dialogues/ > > What do people think of it? > > There's also a critique at: > http://www.addsimplicity.com/adding_simplicity_an_engi/2007/01/a_real_ebay_arc.html > > One of his main counter-arguments seems to be CompleteSale, which I > thought Duncan Cragg could have handled more explicitly, as he did > with ResponseToBestOffer. > -- =================================== Ittay Dror, Chief architect, R&D, Qlusters Inc. ittayd@... +972-3-6081994 Fax: +972-3-6081841 www.openqrm.org - Data Center Provisioning
On Fri, 2007-01-12 at 15:23 +0200, Ittay Dror wrote: > this article is what prompted me to submit this thread in the first > place. if not, i would have probably gone the "ebay" way of defining > actions in the url. i also read > http://addsimplicity.typepad.com/adding_simplicity_an_engi/2006/11/the_rest_dialog.html in which the real architect continues the dialog from his point of view. > > what i don't like about duncan's post is that he addresses only getter > and setter business functions, not real ones, those that actually > create a process. it is very easy to say that instead of getFoo, you > can GET http://example.com/foo. it is harder when you want to model a > doSomething function. I think this is exactly the point, though. HTTP's REST replaces the doSomething concept with a "make something so" concept. If you think about it, all doSomething can be modelled this way. Instead of "make this kind of state transition", just "make your state this". You have to get into specific examples to see how this works. The first example of starting or stopping a function is the same as setting a running resource to false or true. Do you have more examples in mind? What kind of process would you like to create? The general approach to process creation would be: POST a representation of the process to a factory resource to create it. PUT a representation of the process to the process resource to change its operation. DELETE the process to destroy it. PUT a false to the process's "running" resource to suspend it. etc ... Benjamin.
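[Editor's sketch] Benjamin's "make something so" approach can be illustrated with a toy in-memory model: instead of invoking start()/stop() methods, a client POSTs a new process to a factory and then PUTs the state it wants the process to be in. All names and URIs here are invented for illustration:

```python
# Toy model of "make your state this": processes maps a URI to the
# resource's current representation. No HTTP involved; the three
# functions stand in for the corresponding request methods.
import itertools

processes = {}                       # uri -> current representation
counter = itertools.count(1)

def post(factory_uri, representation):
    """POST to a factory resource: create a new process resource.
    The returned URI plays the role of the Location response header."""
    uri = "%s/%d" % (factory_uri, next(counter))
    processes[uri] = dict(representation)
    return uri

def put(uri, representation):
    """PUT: make the resource's state *this*, whatever it was before."""
    processes[uri] = dict(representation)

def delete(uri):
    """DELETE: destroy the process."""
    del processes[uri]

p = post("/processes", {"running": True})
put(p, {"running": False})           # "suspend" = state is now not-running
delete(p)                            # destroy the process
```

Note that there is no "suspend" operation anywhere: suspension is just a state the client asserts, which is what makes PUT safely repeatable.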
On 12/01/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > I think this is exactly the point, though. HTTP's REST replaces the > doSomething concept with a "make something so" concept. If you think > about it, all doSomething can be modelled this way. Instead of "make > this kind of state transition", just "make your state this". You have to > get into specific examples to see how this works. The first example of > starting or stopping a function is the same as setting a running > resource to false or true. Great description for the 'rest for the rest of us'. Thanks Benjamin. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
Benjamin Carlyle wrote: > On Fri, 2007-01-12 at 15:23 +0200, Ittay Dror wrote: >> this article is what prompted me to submit this thread in the first >> place. if not, i would have probably gone the "ebay" way of defining >> actions in the url. i also read >> http://addsimplicity.typepad.com/adding_simplicity_an_engi/2006/11/the_rest_dialog.html in which the real architect continues the dialog from his point of view. >> >> what i don't like about duncan's post is that he addresses only getter >> and setter business functions, not real ones, those that actually >> create a process. it is very easy to say that instead of getFoo, you >> can GET http://example.com/foo. it is harder when you want to model a >> doSomething function. > > I think this is exactly the point, though. HTTP's REST replaces the > doSomething concept with a "make something so" concept. If you think > about it, all doSomething can be modelled this way. Instead of "make > this kind of state transition", just "make your state this". You have to > get into specific examples to see how this works. The first example of > starting or stopping a function is the same as setting a running > resource to false or true. Do you have more examples in mind? What kind > of process would you like to create? what about passing parameters, like 'start at date'? or 'start at date with X resources'? the first can be modeled by setting 'date' to a 'schedule' property of the service. but how can the second? also, many suggestions, while very good in themselves, mean that the rest API is no longer just a gateway to the system's functionality. it forces the system to be built in a certain way (like creating 'process' resources that can be monitored and canceled). while rest is an excellent approach to API, systems are usually designed by object-oriented, procedural or functional approaches. what i would love to see is a separation of API and model, in the same sense as MVC.
i want to construct my system with whatever approach is suitable for me and then be able to provide a restful API to it, without having it break my system model. > > The general approach to process creation would either be: > POST a representation of the process to a factory resource to create it. > PUT a representation of the process to the process resource to change > its operation. > DELETE the process to destroy it. > PUT a false to the process's "running" resource to suspend it. > etc ... > > Benjamin. > > -- =================================== Ittay Dror, Chief architect, R&D, Qlusters Inc. ittayd@... +972-3-6081994 Fax: +972-3-6081841 www.openqrm.org - Data Center Provisioning
On Fri, 2007-01-12 at 17:05 +0200, Ittay Dror wrote: > Benjamin Carlyle wrote: > > On Fri, 2007-01-12 at 15:23 +0200, Ittay Dror wrote: > >> what i don't like about duncan's post is that he addresses only getter > >> and setter business functions, not real ones, those that actually > >> create a process. it is very easy to say that instead of getFoo, you > >> can GET http://example.com/foo. it is harder when you want to model a > >> doSomething function. > > I think this is exactly the point, though. HTTP's REST replaces the > > doSomething concept with a "make something so" concept. If you think > > about it, all doSomething can be modelled this way. Instead of "make > > this kind of state transition", just "make your state this". You have to > > get into specific examples to see how this works. > what about passing parameters, like 'start at date'? or 'start at date > with X resources'? the first can be modeled by setting 'date' to a > 'schedule' property of the service. but how can the second? The "with X resources" sounds like it is part of your function definition. In that case, it should be part of what you POST into your schedule resource: POST http://example.com/functionSchedule HTTP/1.1 Content-Type: application/calendar+xml <vevent> <priority>5</priority> <dtstart>2007-01-13T08:55:00Z</dtstart> <dtend>2007-01-13T09:55:00Z</dtend> <content type="application/my-function-definition"> <function> <resource>resource1</resource> <resource>resource2</resource> <resource>resource3</resource> </function> </content> </vevent> However, the exact design would no doubt rely on factors that are not easy to communicate via email. > also, many suggestions, while very good in themselves mean that the > rest API is no longer just a gateway to the system's functionality. it > enforces the system to be built in a certain way (like creating > 'process' resources that can be monitored and canceled).
> while rest is an excellent approach to API, systems are usually designed by > object-oriented, procedural or functional approaches. You won't be able to write a "REST Proxy" for your system that automatically translates REST requests to internal method invocations. The REST request and the internal method are based on different conceptual models. You will either need to write glue code that maps these conceptual models to each other, or push the REST approach further back into your applications. In the glue code model the resources introduced as part of the REST Proxy would be created and maintained with the glue as handles to internal state. You would just have to be careful about the API and internal state models becoming inconsistent. I'm an advocate of pushing REST further back myself, although this is most effective when the system behind the curtain is a distributed object environment that will benefit from the application of REST principles. > what i would love to see is a separation of API and model, in the same > sense as MVC. i want to construct my system with whatever approach is > suitable for me and then be able to provide a restful API to it, > without having it break my system model. So long as you are prepared to write the glue code, there is nothing to stop this from happening. You can encapsulate the non-RESTful system in a RESTful container. Just don't expect it to be a 1:1 mapping. If the inside is not RESTful and the outside is RESTful, it will be n:m. Benjamin.
Benjamin Carlyle wrote: > I'm an advocate of pushing REST further back myself, although this is > most effective when the system behind the curtain is a distributed > object environment that will benefit from the application of REST > principles. > > > what i would love to see is a separation of API and model, in the same > > sense as MVC. i want to construct my system with whatever approach is > > suitable for me and then be able to provide a restful API to it, > > without having it break my system model. > > So long as you are prepared to write the glue code, there is nothing to > stop this from happening. You can encapsulate the non-RESTful system in > a RESTful container. Just don't expect it to be a 1:1 mapping. If the > inside is not RESTful and the outside is RESTful, it will be n:m. > > Benjamin. > This feels like over-engineering. Just because something can be mapped to concept X doesn't mean it should be. Otherwise, why would we ever have left the realm of Turing machines? Having worked for many years in designing systems for developers, I find it incredibly important to connect intuition to implementation. Mapping everything into POST-generic-stuff-here isn't useful for humans, because it doesn't partition the space we need to reason in. Without partitions, we can't apply the divide-and-conquer reasoning scheme (a.k.a. separation of concerns). Modeling certain operations as RPC has its place, imho. That's why I defined the Behavior pattern in 'REST for the rest of us' [1] (although I'm thinking about renaming it to something less abstract). Like anything else, it should be used only when required, but when it is required, it shouldn't be shunned. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org [1] http://doc.opengarden.org/index.php?title=Articles/REST_for_the_Rest_of_Us&bc=2
Don Box says that "For what it's worth, I happen to agree, as do many folks I talk to in the big house." http://pluralsight.com/blogs/dbox/archive/2007/01/12/45674.aspx Maybe we will see better REST support from Microsoft. Things I would like: - REST Framework for .NET (rather than having to code everything from the HTTP pipeline up). - REST Module for IIS (don't try to re-use the WebDAV module). - More guides, articles and case studies covering RESTful architectures on MSDN. - Dedicated REST evangelists (just as there are some dedicated to WS-*). - Acceptance of REST, as opposed to 'adopt and extend' (Don's hi-REST / lo-REST article a while back gives me cause for concern here). - RESTful improvements to IE8 (quite a long list, starting with <form> support for PUT and DELETE, including a better Basic|Digest Authentication story, etc). - RDF / RDFS / SPARQL support in the .NET Framework (I am firmly of the opinion that REST works best when leveraging semantic technologies). I would be bowled over if N3 was also supported. - RelaxNG schema support (for the same reason as the previous point - XmlSchema / XPath are too brittle in the face of REST). - I guess that part of the community would like support for Atom (Pub) as well, although I'm not personally that fussed. What else would the group add to this list? Regards, Alan Dean
Hello everyone. I'm normally a lurker but I have a question for the group. Let me paint the scenario for you: I have a Resource. If the User is anonymous, when they GET the Resource, the Representation returned displays one set of information. However, if the User is of the type Admin, the Representation returned displays the same information, but with additional information relevant to the Admin User. My question is this: should the decision to display this different response data rest with the Resource, or the Representation: that is, should the Resource always return ALL the information, regardless of the User type, and let the Representation determine the information to be returned based on User type, or should the data the Resource returns be limited based on User type? The latter would, in effect, create two Resources, whereas the first would call the same Resource, and filter the response based on rules defined in the Representation. I feel that the Resource should return all the information and that it is the responsibility of the Representation to determine what is displayed. What do you guys think? Cheers! Ben
On Sat, 2007-01-13 at 02:58 +0000, Steve G. Bjorg wrote: > Benjamin Carlyle wrote: > > I'm an advocate of pushing REST further back myself, although this is > > most effective when the system behind the curtain is a distributed > > object environment that will benefit from the application of REST > > principles. > > > what i would love to see is a separation of API and model, in the same > > > sense as MVC. i want to construct my system with whatever approach is > > > suitable for me and then be able to provide a restful API to it, > > > without having it break my system model. > > So long as you are prepared to write the glue code, there is nothing to > > stop this from happening. You can encapsulate the non-RESTful system in > > a RESTful container. Just don't expect it to be a 1:1 mapping. If the > > inside is not RESTful and the outside is RESTful, it will be n:m. > This feels like over-engineering. It's not because something can be > mapped to concept X that it should be. Otherwise, why would we ever > have left the realm of Turing machines. ... > Modeling certain operations as RPC has its place, imho. That's why I > defined the Behavior pattern in 'REST for the rest of us' [1] > (although, I'm thinking about renaming it to something less abstract). > Like anything else, it should be used only when required, but when it > is required, it shouldn't be shunned. I would be inclined to call this an antipattern. I think it should be avoided. Mapping arbitrary method invocation into POST doesn't help web browsers or other generic network components interact with the resource. It is more of a back-door for when you don't really want to do REST for whatever reason... and there may well be reasons. However, I would be inclined to look to adding special request methods rather than hiding them in the body of a POST request or in a url. If you are using RPC instead of REST style, you might as well put your method where the protocol says it should go.
Do you have any examples of this pattern being applied that do not cleanly map into standard HTTP methods? Benjamin.
On Sat, 2007-01-13 at 10:30 +0000, omarshariffdontlikeit wrote: > I have a Resource. If the User is anonymous, when they GET the > Resource, the Representation returned displays one set of information. > However, if the User is of the type Admin, the Representation returned > displays the same information, but with additional information > relevant to the Admin User. > My question is this: should the decision to display this different > response data rest with the Resource, or the Representation: that is, > should the Resource always return ALL the information, regardless of > the User type, and let the Representation determine this information > to be returned based on User type, or should the data the Resource > returns be limited based on User type? The latter would, in effect > create two Resources, whereas the first would call the same Resource, > and filter the response based on rules defined in the Representation. I would be inclined to say that these different return values are coming about because what you really have are two different resources. Perhaps a more specific statement of your use case would help clarify things. I think when you talk about filtering or operations on the representation you are referring to what happens within the program that is maintaining the resource. If so, this is an internal matter. Only what the user sees is important in the REST architecture. The design of your program is a separate matter: should you have two objects for the two resources, or should you have one object and another that provides filtering on the representation that object produces? Answer: it depends :) REST permits intermediaries to be introduced into an architecture, and this layering doesn't stop at the edge of your process. The protocol handling code is an intermediary. You might have a pipeline that contains authentication modules and other transformations. In this case it would seem that security is an issue.
If so, you would want to look at whether tagging information as admin-only and filtering it out is safer than constructing separate layering chains for the admin-only and generic content. It's hard to say without looking at your code, although it might be easier to secure the admin resources if they had separate URLs from the non-admin resources. A rights-checking layer could match all of the .*/admin pages and disallow access to unauthorised users. Another way to approach the problem is to say that the normal page is one resource, but the additional information is another resource reserved for administrators. For example, you might have a div down the bottom of the page where admin information is to be dynamically inserted dependent on rights being granted. You could use a javascript capability like HInclude to conditionally incorporate the admin information into the main content. This would have caching as well as other benefits. The pure anonymous page could be served by a completely different application from the admin-only page if you so desired. Benjamin.
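[Editor's sketch] The "tag and filter" option Benjamin mentions can be illustrated with a toy filtering layer: each field of the full representation carries the minimum role allowed to see it, and the layer strips anything the requester isn't entitled to. Every name and field here is invented for illustration:

```python
# Hypothetical full representation: (field name, value, minimum role).
# A real system would derive this from its data model, not a literal.
FULL_REPRESENTATION = [
    ("title", "Widget report", "anonymous"),
    ("summary", "All widgets nominal", "anonymous"),
    ("audit_log", "3 failed logins", "admin"),
]

def render(role):
    """Return only the fields visible to the given role."""
    allowed = {"anonymous": {"anonymous"},
               "admin": {"anonymous", "admin"}}[role]
    return {name: value
            for name, value, min_role in FULL_REPRESENTATION
            if min_role in allowed}

print(sorted(render("anonymous")))   # -> ['summary', 'title']
```

The trade-off Benjamin raises still applies: a filtering bug here leaks admin data into the anonymous representation, whereas separate .*/admin URLs fail closed under a rights-checking layer.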
Anne van Kesteren wrote: > On Fri, 12 Jan 2007 12:22:25 +0100, Jerome Louvel > <contact@...> > wrote: > > That would be so nice to see such a proposal accepted for > HTML 5.0, in > > addition to adding support for PUT and DELETE actions. > > FYI, Web Forms 2 which in due course will be part of the > HTML5 proposal already includes support for PUT and DELETE. Any thoughts on the URI Templates? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org/ "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
"omar", you answered your own question: "I have a Resource." That means in your view, it is the same thing regardless of who is looking toward it. Servers vary representations based on the characteristics of the client. Please remember that Resource is an abstraction. Resources don't "return" anything: your implementation does. Walden ----- Original Message ----- From: "omarshariffdontlikeit" <omarshariffdontlikeit@...> To: <rest-discuss@yahoogroups.com> Sent: Saturday, January 13, 2007 5:30 AM Subject: [rest-discuss] Access permissions and Resources : Hello everyone. I'm normally a lurker but I have a quetion for the group. : : Let me paint the scenario for you: : : I have a Resource. If the User is anonymous, when they GET the : Resource, the Representation returned displays one set of information. : However, if the User if of the type Admin, the Representation returned : displays the same information, but with additional information : relevant to the Admin User. : : My question is this: should the decision to display this different : response data rest with the Resource, or the Representation: that is, : should the Resource always return ALL the information, regardless of : the User type, and let the Representation determine this information : to be returned based on User type, or should the data the Resource : returns be limited based on User type? The latter would, in effect : create two Resources, whereas the first would call the same Resource, : and filter the response based on rules defined in the Representation. : : I feel that the Resource should return all the information and that it : is the responsibility of the Representation to determine what is : displayed. What do you guys think? : : Cheers! : : Ben : : : : : __________ NOD32 1975 (20070113) Information __________ : : This message was checked by NOD32 antivirus system. : http://www.eset.com : :
Benjamin Carlyle wrote: > Do you have any examples of this pattern being applied that do not > cleanly map into standard HTTP methods? I do and would love to learn about the design alternatives. I have a Host service that is responsible for managing all services/resources in the Dream environment. There are two kinds of interactions I find problematic to map. The first one is blueprint registration. A blueprint is an XML document that tells the host what assembly/library to use and what service definitions it contains. Registering a blueprint is done by sending a blueprint to: POST /host/register The second operation is instantiating a service. In order to instantiate a service, a configuration XML document must be sent to the Host. This is achieved by doing: POST /host/start In both cases, the document sent has "application/xml" as mime type. If another type gets sent, it is converted to "application/xml" if possible (e.g. application/json, text/javascript, text/php, and other supported XML mappings). So, what would be the alternative? - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Alan Dean wrote: > Don Box says that "For what it's worth, I happen to agree, as do many > folks I talk to in the big house." Don is reactive, not proactive. REST was shown over-and-over to the Indigo team (now Windows Communication Foundation), but it was ignored. Why? I don't know, but I'm assuming it was political. Henrik eventually left the team in disgust. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
On Sat, 2007-01-13 at 18:53 +0000, Steve G. Bjorg wrote:
> I have a Host service that is responsible for managing all
> services/resources in the Dream environment. There are two kinds of
> interactions I find problematic to map.
>
> The first one is blueprint registration. A blueprint is an XML
> document that tells the host what assembly/library to use and what
> service definitions it contains. Registering a blueprint is done by
> sending a blueprint to:
> POST /host/register
>
> The second operation is instantiating a service. In order to
> instantiate a service, a configuration XML document must be sent to
> the Host. This is achieved by doing:
> POST /host/start
>
> In both cases, the document sent has "application/xml" as mime type.
> If another type gets sent, it is converted to "application/xml" if
> possible (e.g. application/json, text/javascript, text/php, and other
> supported XML mappings).

I think that both of these cases sound OK on the surface, though they could do with some fine-tuning. When you talk about your behaviour pattern there can be extremes in what you do. One extreme would be to put a method invocation into the content. Another would be to lean towards the direction you seem to be going in these two cases. In fact, I would cast them as two different patterns.

Let's consider the blueprint case. You want the server to store a blueprint, so you give your blueprint to the register resource. This actually sounds more like your container pattern. You POST the blueprint to the factory resource. The response is "Created", with a Location header that points you to the created resource. You could update the blueprint with a PUT, and deregister the blueprint by DELETE-ing the created resource. Your server could be replaced by a range of possible blueprint-storage systems, and the client would continue to work. It would just issue its POST to whatever it was configured with.

Instantiation also sounds like your container pattern to me.
This time you are posting the state of a new service (which includes configuration) to a service factory. It would create a resource to represent the ongoing state of the service. A DELETE would destroy the service, a PUT would change its state.

I suggest that you give some additional thought to:

* The naming of factory resources. They both mean "create state when POSTed", and so does every other resource in the architecture. I would just call them something like "/blueprints" and "/services".

* Your content types. You are currently indicating the file format, but not the kind of information you are transferring in your document. File format is important when multiple encodings of an abstract model are permitted, but identifying the kind of information is more important. I would be thinking along the lines of "application/something-blueprint" and "application/something-service". Though, you may need to have a different content type for each kind of configuration you have for the latter to really make sense.

* What happens to the state you transferred after you transfer it. Is it accessible via a new resource (Created)? Does it get mapped into the factory resource (OK)? Or is it swallowed by the factory resource, leaving no visible handles to work with (No Content)? All three are normally legal, and clients should be written to deal with any one of these being returned.

Benjamin
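[Editor's note: the create/update/deregister lifecycle Benjamin describes (POST creates a child and returns its URI, PUT replaces its state, DELETE removes it) can be sketched as a tiny in-memory container. Everything here, including the class name and URI shape, is invented for illustration; a real server would return "201 Created" with a Location header rather than a string.]

```python
import itertools

class BlueprintContainer:
    """Minimal in-memory sketch of the factory/container pattern:
    POST creates a child resource and returns its new URI,
    PUT replaces a child's state, DELETE removes it.
    The base path is hypothetical."""

    def __init__(self, base="/host/blueprints"):
        self.base = base
        self._store = {}
        self._ids = itertools.count(1)

    def post(self, document):
        # "201 Created" semantics: mint a URI and return it
        # (this is what would go in the Location header).
        uri = "%s/%d" % (self.base, next(self._ids))
        self._store[uri] = document
        return uri

    def put(self, uri, document):
        # Replace the state at an existing child resource.
        self._store[uri] = document

    def delete(self, uri):
        # Deregister the blueprint.
        del self._store[uri]

    def get(self, uri):
        return self._store.get(uri)

c = BlueprintContainer()
loc = c.post("<blueprint .../>")
```

As Benjamin notes, the client never depends on what sits behind the factory: it only POSTs to a configured URI and follows the returned location.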
Benjamin Carlyle wrote:
> On Fri, 2007-01-12 at 15:23 +0200, Ittay Dror wrote:
>> this article is what prompted me to submit this thread in the first
>> place. if not, i would have probably gone the "ebay" way of defining
>> actions in the url. i also read
>> http://addsimplicity.typepad.com/adding_simplicity_an_engi/2006/11/the_rest_dialog.html
>> in which the real architect continues the dialog from his point of view.
>>
>> what i don't like about duncan's post is that he addresses only getter
>> and setter business functions, not real ones, those that actually
>> create a process. it is very easy to say that instead of getFoo, you
>> can GET http://example.com/foo. it is harder when you want to model a
>> doSomething function.
>
> I think this is exactly the point, though. HTTP's REST replaces the
> doSomething concept with a "make something so" concept. If you think
> about it, all doSomething can be modelled this way. Instead of "make
> this kind of state transition", just "make your state this". You have to

not every function can be modeled around state transitions.

> get into specific examples to see how this works. The first example of
> starting or stopping a function is the same as setting a running
> resource to false or true. Do you have more examples in mind? What kind
> of process would you like to create?
>
> The general approach to process creation would either be:
> POST a representation of the process to a factory resource to create it.
> PUT a representation of the process to the process resource to change
> its operation.
> DELETE the process to destroy it.
> PUT a false to the process's "running" resource to suspend it.
> etc ...
ok, here is my current understanding: when you want to model a business process as REST, assuming it is an actual process that creates information, not just a tool to manipulate it, then:

- if the BP works on a single entity, with no other parameters, then maybe it can be modeled as changing a property of that entity:
  - scheduleServiceStart(Service, Date) -> PUT date into http://example.com/service/foo/schedule
- if the BP has multiple arguments, either break it into several BPs:
  - scheduleService(Service, Action, Date) -> (for each action) PUT date into http://example.com/service/foo/action-schedule
- otherwise, create a ProcessManager resource and POST the details of the BP to it. maybe partition the BPs and create a ProcessManager for each group (a scheduler for schedules, others for other actions)

is this about right? if so, then isn't the ProcessManager a wrapper around RPC? the above process does mean that many functions inside a server can be turned into a non-RPC REST API.

>
> Benjamin.
>

--
===================================
Ittay Dror, Chief architect, R&D, Qlusters Inc.
ittayd@...
+972-3-6081994 Fax: +972-3-6081841
www.openqrm.org - Data Center Provisioning
On Sun, 2007-01-14 at 07:00 +0200, Ittay Dror wrote: > Benjamin Carlyle wrote: > > On Fri, 2007-01-12 at 15:23 +0200, Ittay Dror wrote: > >> this article is what prompted me to submit this thread in the first > >> place. if not, i would have probably gone the "ebay" way of > defining > >> actions in the url. i also read > http://addsimplicity.typepad.com/adding_simplicity_an_engi/2006/11/the_rest_dialog.html in which the real architect continues the dialog from his point of view. > >> what i don't like about duncan's post is that he addresses only > getter > >> and setter business functions, not real ones, those that actually > >> create a process. it is very easy to say that instead of getFoo, > you > >> can GET http://example.com/foo. it is harder when you want to model > a > >> doSomething function. > > I think this is exactly the point, though. HTTP's REST replaces the > > doSomething concept with a "make something so" concept. If you think > > about it, all doSomething can be modelled this way. Instead of "make > > this kind of state transition", just "make your state this". > not every function can be modeled around state transitions. > ok, here is my current understanding: > when you want to model a business process as REST, assuming it is an > actual process, that creates information, not just a tool to > manipulate one, then: > - if the BP works on a single entity, with no other parameters, then > maybe it can be modeled as changing a property of that entity: > - scheduleServiceStart(Service, Date) -> PUT date into > http://example.com/service/foo/schedule > - if the BP has multiple arguments, either break to several BP: > - scheduleService(Service, Action, Date) -> (for each action) PUT date > into http://example.com/service/foo/action-schedule > - otherwise, create a ProcessManager resource and POST to it the > details of the BP. 
> maybe partition BP and create a ProcessManager for
> each group (scheduler for schedules, others for other actions)

I'm not sure I understand your base assumption here. REST is just a tool to access and manipulate a business process. However, in order to manipulate it, you have to model the process as a set of resources. This is the same as if you were in a more general RPC environment and had to model the process as more general objects. The goal in using resources with their uniform interface is to invoke properties of REST, like having your clients work with a thousand other services... controlling the evolution of your network interface... that sort of thing. If your system is of sufficient scale, you might even be able to see improvements in overall system performance when you apply REST principles.

The specific design a system should take will depend on factors that are difficult to communicate via email. However, a few options seem possible in your example:

POST http://example.com/services/fooservice/runschedule
application/the-schedule-document-type
<vevent>
  <dtstart>some time</dtstart>
  <dtend>later time</dtend>
</vevent>

Note that PUT is designed to replace the information at a particular resource, while POST is designed to add to it. We use POST here to add schedule entries without destroying earlier entries. We don't need to specify anything other than start and end time, because the rest is handled by the context of the URL we are POSTing to. The fooservice will run and stop at the scheduled times. You could repeat this pattern for other services, or maintain a global/group schedule:

POST http://example.com/services/runschedule
application/the-schedule-document-type
<vevent>
  <dtstart>some time</dtstart>
  <dtend>later time</dtend>
  <url>http://example.com/services/fooservice</url>
</vevent>

This version would need to contain the extra information about what actually needed to run.
Now when you get to actions, you look like you are moving outside of the pure REST envelope again. However, you could model HTTP actions like this:

POST http://example.com/actionschedule
application/the-schedule-document-type
<vevent>
  <dtstart>some time</dtstart>
  <action>
    <method>PUT</method>
    <url>http://example.com/services/fooservice</url>
    <document type="application/the-document-type">
      <state>running</state>
      <configuration>....</configuration>
      ...
    </document>
  </action>
</vevent>

> if so, then isn't the ProcessManager a wrapper around RPC? the above
> process does mean that many functions inside a server can be turned to
> non-RPC REST API.

In REST we typically expose the interface to our services as information rather than arbitrary method invocations with names hidden in urls or content. It can be a pretty serious rethink to consider what information you are providing to users for access and update, rather than what methods the users may want to invoke. There are often mismatches that make it difficult to do this as an API, but momentum that makes it just as difficult to change things internally to match the REST approach. Sometimes the best you can do is apply the best principles to new software and see how it goes for a while before even thinking about a retrofit. In the end you have to consider whether REST is valuable enough in your environment to offset the cost. In small well-controlled environments it often won't be. If the scale of your operation is larger it may start to pay for itself in reduced coordination costs internally.

Benjamin.
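[Editor's note: the schedule-entry documents Benjamin sketches are easy to generate programmatically. The sketch below assumes his invented element names (vevent, dtstart, dtend, url) and hypothetical content type; none of this is a registered format.]

```python
import xml.etree.ElementTree as ET

def schedule_entry(dtstart, dtend, service_url=None):
    """Build a schedule-entry document in the shape sketched above.
    The element names follow Benjamin's example; the media type
    ("application/the-schedule-document-type") is hypothetical."""
    vevent = ET.Element("vevent")
    ET.SubElement(vevent, "dtstart").text = dtstart
    ET.SubElement(vevent, "dtend").text = dtend
    if service_url is not None:
        # The global/group-schedule variant also names the target service.
        ET.SubElement(vevent, "url").text = service_url
    return ET.tostring(vevent, encoding="unicode")

doc = schedule_entry("2007-01-15T09:00", "2007-01-15T17:00",
                     "http://example.com/services/fooservice")
```

The per-service variant simply omits the url element, since the target is implied by the URL being POSTed to.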
I'm looking for some feedback on URI patterns. I just recently found
out about URI templates [1] and have been scratching my head on how
they could be extended to also provide URI patterns.
For example:
http://server/{first}/{last}
This URI could be used to produce or consume a URI such as:
http://server/john/doe
However, the template syntax doesn't work for matching query parameters:
http://server/{first}/{last}?maxcount={maxrecords}&offset={offset}
In this case, producing a URI is simple, but for using it as a
pattern, a few questions arise:
* How would one indicate that 'maxcount' and 'offset' are optional
parameters?
* How would one indicate that they are mandatory?
How about using brackets to identify optional parameters?
http://server/{first}/{last}?[maxcount={maxrecords}]&offset={offset}
Would it make sense to combine them? For example to indicate that
both parameters are needed or neither?
http://server/{first}/{last}?[maxcount={maxrecords}&offset={offset}]
How about either/or choices using vertical bar (|)?
http://server/{first}/{last}?[maxcount={maxrecords}&[offset={offset}|page={page}]]
Does this make sense? How have you dealt with matching URIs?
Cheers,
- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
[1] http://www.ietf.org/internet-drafts/draft-gregorio-uritemplate-00.txt
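[Editor's note: one pragmatic answer to the closing question ("How have you dealt with matching URIs?") is to compile the template into a regular expression. The sketch below is illustrative only: it handles plain {name} path variables and none of the proposed optional/query syntax.]

```python
import re

def template_to_regex(template):
    """Compile a simple URI template into a regex for matching.
    Each {name} becomes a named group matching one path segment.
    Sketch only: no support for query parameters or optionality."""
    parts = re.split(r"(\{\w+\})", template)
    out = []
    for part in parts:
        if part.startswith("{") and part.endswith("}"):
            out.append("(?P<%s>[^/?#]+)" % part[1:-1])
        else:
            out.append(re.escape(part))  # literal text, escaped
    return re.compile("".join(out) + "$")

m = template_to_regex("http://server/{first}/{last}").match("http://server/john/doe")
```

Matching gives back the variable bindings by group name, which is the inverse of the expansion the draft describes.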
I'm more interested in the generation of URIs than in the parsing, as I
would use regular expressions to parse.
It would be very handy to have the URI templates support indicating optional
output, especially for query terms.
> -----Original Message-----
> From: rest-discuss@yahoogroups.com
> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Steve G. Bjorg
> Sent: Monday, January 15, 2007 12:04 PM
> To: rest-discuss@yahoogroups.com
> Subject: [rest-discuss] URI Templates & Patterns
>
> I'm looking for some feedback on URI patterns. I just
> recently found out about URI templates [1] and have been
> scratching my head on how they could be extended to also
> provide URI patterns.
>
> For example:
> http://server/{first}/{last}
>
> This URI could be used to produce or consume a URI such as:
> http://server/john/doe
>
> However, the template syntax doesn't work for matching query
> parameters:
> http://server/{first}/{last}?maxcount={maxrecords}&offset={offset}
>
> In this case, producing an URI is simple, but for using it as
> a pattern, a few questions arise:
> * How would one indicate that 'maxcount' and 'offset' are
> optional parameters?
> * How would one indicate that they are mandatory?
>
>
> How about using brackets to identify optional parameters?
> http://server/{first}/{last}?[maxcount={maxrecords}]&offset={offset}
>
> Would it make sense to combine them? For example to indicate
> that both parameters are needed or neither?
> http://server/{first}/{last}?[maxcount={maxrecords}&offset={offset}]
>
> How about either/or choices using vertical bar (|)?
> http://server/{first}/{last}?[maxcount={maxrecords}&[offset={o
> ffset}|page={page}]]
>
> Does this make sense? How have you dealt with matching URIs?
>
>
> Cheers,
>
> - Steve
>
> --------------
> Steve G. Bjorg
> http://www.mindtouch.com
> http://www.opengarden.org
>
>
> [1]
> http://www.ietf.org/internet-drafts/draft-gregorio-uritemplate-00.txt
Hi,
I'd like to continue the question Matthias asked regarding updating
composite and individual resources. I think I basically have the same
question myself and would like to know if this approach might be
considered Un-RESTful?
To add another example, I'm looking at building a RESTful mortgage
application system; its resources might look something like this:
Application
--Applicants
----Applicant
------Address
------EmploymentHistory
------FinancialHistory
--PropertyDetails
----Address
etc...
I can see two ways that our customers may want to use the system:
1) Navigate through the resources, following each link, adding more
and more information as they go.
or
2) Send all the data in one go. Or as much as they have available,
perhaps returning later to navigate through and fill in the gaps.
To accommodate both scenarios a POST to /Application could examine the
amount of data that was sent and create a varying number of child
resources accordingly.
I feel like it "smells a bit" because a GET to /Application would
return a different (i.e. shorter) representation than that which can
be POSTed back.
Does anyone consider this kind of approach to be Un-RESTful? If so,
why, and any suggestions on how it can be made more RESTful?
--- In rest-discuss@yahoogroups.com, "Ernst, Matthias"
<matthias.ernst@...> wrote:
>
> Hi,
>
> one of the cornerstones of RESTful design is the concept of
> hierarchy and containment - such as POST in the sense of "insert new
child".
>
> I'm trying to make use of this for the configuration of a hierarchical
> service container about as follows:
>
> GET /container : answer a representation of the container's
configuration (port, max threads, ...) including links to all services
> PUT /container : reconfigure the container
>
> POST /container: add service
> GET /container/{service}: answer service configuration
> PUT /container/{service}: add/reconfigure service
> DELETE /container/{service}: remove service
>
> So far so good. However, I'm unsure about the significance of the
service configuration links in the container resource. PUTting to the
container resource with fewer service links, should that indicate the
deletion of a service? Or should I ignore those?
>
> Also, I'd like to GET/PUT the _entire_ configuration (container +
services) in one representation, i.e. one that contains the service
configurations in the container configuration instead of links.
> PUTting that representation to /container would then also create,
reconfigure or delete services.
>
> How would you represent that? A different URI? A different
MIME-Type? GET /container with accept:
application/entireconfiguration+xml vs
application/containerconfiguration+xml
>
> Or I could use the same mime type but make the child representations
optional in the schema: GET /container always includes the service
configurations including their individual links, embedded in a
<services> element. PUTting a representation that does not include
said <services> element would only update the container resource.
Including the <services> element would indicate that I want to
configure all services at once, too.
>
> Does that make sense to you?
>
> Thanks
> Matthias
>
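[Editor's note: one way to make Matthias's Accept-header option concrete is to dispatch on the requested media type when building the container representation. A hedged sketch using the thread's own hypothetical type names; the container/service fields are invented.]

```python
# Hypothetical media types from the post above.
ENTIRE = "application/entireconfiguration+xml"
CONTAINER_ONLY = "application/containerconfiguration+xml"

# Invented example state for the container and its services.
CONTAINER = {"port": 8080, "max_threads": 16}
SERVICES = {"svc-a": {"enabled": True}, "svc-b": {"enabled": False}}

def get_container(accept):
    """Answer the container representation. Embed full service
    configurations only when the client asks for the 'entire' media
    type; otherwise include just links to the service resources."""
    body = dict(CONTAINER)
    if accept == ENTIRE:
        body["services"] = SERVICES
    else:
        body["service_links"] = ["/container/" + name for name in sorted(SERVICES)]
    return body

full = get_container(ENTIRE)
links_only = get_container(CONTAINER_ONLY)
```

The same dispatch works for PUT: a document of the "entire" type would authorize the server to create, reconfigure, or delete services, while the container-only type would touch only the container's own configuration.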
Ittay Dror: > this article is what prompted me to submit this thread in the first > place. if not, i would have probably gone the "ebay" way of defining > actions in the url. i also read > http://addsimplicity.typepad.com/adding_simplicity_an_engi/2006/11/the_rest_dialog.html > in which the real architect continues the dialog from his point of view. > what i don't like about Duncan's post is that he addresses only getter > and setter business functions, not real ones, those that actually > create a process. it is very easy to say that instead of getFoo, you > can GET http://example.com/foo. > it is harder when you want to model a doSomething function. Part 3 of the series has a discussion of these 'real business functions' that go beyond simple reading and writing, or getting and setting, data on the server. Anyway, thanks for reading my dialogue articles! I have been hesitant to notify this list of their existence because: (a) they're not complete yet, and (b) they may be seen as a challenge to the direction that REST has gone towards the CRUD view since The Thesis, and I don't want to be seen as in any way confrontational. Plus, I'm aware that I got a ton of traffic to my blog via a kind link from DHH - and he has just converted Rails to CRUD-REST... =0/ However, now I've been linked to on the list itself, I suppose it's pointless not coming over here and defending myself. So - I promote a /symmetric/ REST point of view, with active resources being dependent on each other and conveying state between themselves with either GET or POST depending on which party initiates the transfer. I do hope and believe this pattern is still REST-compatible. Please read part 3 of my series (http://duncan-cragg.org/blog/post/business-functions-rest-dialogues/) for more explanation of this pattern. Of course, I probably end up thereby promoting POST idempotency, but I see that as a good thing. Example: if you've added something to a list, adding it again doesn't do anything. 
For some function same(resx,resy), of course. Now, Benjamin Carlyle, who should always be heeded, said: > .. REST replaces the doSomething concept with a "make something so" > concept. If you think about it, all doSomething can be modelled > this way. .. "make your state this". This is an example of what I currently call (in my grubby lab notebook) a 'transformation intent' - where some party expresses directly to a resource that they'd like some state to become manifest in or perhaps around that target resource. My heresy (maybe evolution, then) is that I don't see the future of REST being constrained to resources responding to such transformation intents of other parties: I see some (most!) resources transforming /themselves/ in reaction to the state of /other/, peer resources. Allow me to elaborate.. I break REST interaction down into three modes - from dumb to smart, via dependent: -: If a resource receives such a direct transformation request or intent, it may be *dumb* and go ahead and do just what it's told, whenever it's told. That's what I was talking about in parts 1 and 2 of the dialogues. It's a bit like a database. I'd still do without PUT and DELETE, mind, to prevent it being seen that way! -: Alternatively - what seems to be the subject of this thread - it may have *real-world dependency*: maybe it can't just switch to 'running' until the real world thing it models actually /is/ running! So, when it receives a direct transformation intent, it goes off and satisfies that constraint by ensuring it's ticking over in reality, and only then changes its visible state to 'running'. -: Finally, the resource may be *smart*, and decide to switch to 'running' because of the rule that, as long as Joe's resource is running, it should be running itself. So it spots Joe's resource running, and starts running without even being told to! That's what I was talking about in part 3 of my dialogues. 
The latter is advanced REST programming in transformation rules, I suppose. Spontaneous transformation without any direct transformation intent - indirect implication or even deduction. I haven't directly covered the middle - hidden real-world responsibilities - case so far in my article series. I didn't think settable resources with non-disk-state side-effects were sufficiently common in the usual REST integration world to warrant coverage as yet! Except insofar as it's implied by the first - dumb - case, in that the visible state of a resource obviously shouldn't be set to something for everyone to see until it's actually saved in that state on disk. I also alluded to this sort of thing when talking about email side effects in part 3. A printer comes to mind as another example of such a resource: you may want to take it offline by requesting online/offline state. A print queue resource is related to this. An example which is /not/ like this is a user interface, which 'watches' other state rather than being told what state to achieve directly, and is more like the third, smart, dependent resource, case above. (Actually, in Second Life, I think someone can push your 'avatar resource' and directly change what you can see, but now we're really looking ahead to the future of REST!) I'll get to these examples later in my series. Which may now need to be ten parts =0( Meanwhile, I'd be delighted to hear what you - Roy Fielding - have to say about all this... =0) Cheers! Duncan _________________________________ Duncan Cragg http://duncan-cragg.org/blog/
Steve G. Bjorg wrote:
> I'm looking for some feedback on URI patterns. I just
> recently found out about URI templates [1] and have been
> scratching my head on how they could be extended to also
> provide URI patterns.
FYI, the list to discuss URI Templates is [uri@...]. URI Templates is a
great piece of work.
> For example:
> http://server/{first}/{last}
>
> This URI could be used to produce or consume a URI such as:
> http://server/john/doe
>
> However, the template syntax doesn't work for matching query
> parameters:
> http://server/{first}/{last}?maxcount={maxrecords}&offset={offset}
>
> In this case, producing an URI is simple, but for using it as
> a pattern, a few questions arise:
> * How would one indicate that 'maxcount' and 'offset' are
> optional parameters?
> * How would one indicate that they are mandatory?
>
> How about using brackets to identify optional parameters?
> http://server/{first}/{last}?[maxcount={maxrecords}]&offset={offset}
>
> Would it make sense to combine them? For example to indicate
> that both parameters are needed or neither?
> http://server/{first}/{last}?[maxcount={maxrecords}&offset={offset}]
I'm pretty sure the syntax for an optional parameter is a trailing question
mark within the braces, i.e.:
http://server/{first}/{last}?maxcount={maxrecords?}&offset={offset?}
The idea here is that if maxrecords is null then all of
"maxcount={maxrecords?}" would be omitted, and the same for offset (I think.
I hope.)
>
> How about either/or choices using vertical bar (|)?
>
http://server/{first}/{last}?[maxcount={maxrecords}&[offset={offset}|page={p
age}]]
>
I had proposed a comma, but since proposing it, it has occurred to me that a
vertical bar would be more consistent with other languages' use of an "or"
operator, so to me, yes, it makes sense.
That said, I cc'd [uri@...]; you can review the archive and sign up at
[1].
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org/
"It never ceases to amaze how many people will proactively debate away
attempts to improve the web..."
[1] http://lists.w3.org/Archives/Public/uri/
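[Editor's note: the optional-parameter behaviour described above (a null variable marked {name?} drops the whole key={name?} query pair; a null mandatory variable is an error) can be sketched concretely. This follows the thread's reading of the early draft, which was still evolving, so treat the semantics as the thread's, not a spec's.]

```python
import re

def expand(template, values):
    """Expand a URI template where '{name?}' marks an optional
    variable: if its value is missing or None, the surrounding
    'key={name?}' query pair is dropped entirely. Missing mandatory
    query variables raise an error. Sketch only."""
    if "?" in template:
        path, query = template.split("?", 1)
    else:
        path, query = template, ""
    query_parts = []
    for pair in filter(None, query.split("&")):
        m = re.fullmatch(r"(\w+)=\{(\w+)(\??)\}", pair)
        if not m:
            query_parts.append(pair)  # literal pair, pass through
            continue
        key, var, optional = m.groups()
        if values.get(var) is not None:
            query_parts.append("%s=%s" % (key, values[var]))
        elif not optional:
            raise ValueError("missing mandatory parameter: " + var)
    path = re.sub(r"\{(\w+)\}", lambda m: str(values[m.group(1)]), path)
    return path + ("?" + "&".join(query_parts) if query_parts else "")

uri = expand("http://server/{first}/{last}?maxcount={maxrecords?}&offset={offset?}",
             {"first": "john", "last": "doe", "maxrecords": 10})
```

With maxrecords supplied and offset absent, only the maxcount pair survives in the expanded URI.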
Duncan Cragg wrote:
> Ittay Dror:
>
> > this article is what prompted me to submit this thread in the first
> > place. if not, i would have probably gone the "ebay" way of defining
> > actions in the url. i also read
> > http://addsimplicity.typepad.com/adding_simplicity_an_engi/2006/11/the_rest_dialog.html
> > in which the real architect continues the dialog from his point of view.
> > what i don't like about Duncan's post is that he addresses only getter
> > and setter business functions, not real ones, those that actually
> > create a process. it is very easy to say that instead of getFoo, you
> > can GET http://example.com/foo.
> > it is harder when you want to model a doSomething function.
>
> Part 3 of the series has a discussion of these 'real business functions'
> that go beyond simple reading and writing, or getting and setting, data
> on the server.

good article. let's see if i got it right: the difference of REST design vs RPC (SOAP) is that in REST, the client states what the final state of the resource should be, and the server does whatever it needs to accomplish that. in RPC, the client initiates a process which can change states as it progresses.

if this is true, what happens if the server can't reach the declared state? e.g., i have a printer, which i want to put online. i can POST/PUT 'online' to http://example.org/printers/pr1/status. but what if the actual process of bringing the physical printer online fails? won't it be confusing if http://example.org/printers/pr1/status is changed to 'error' by the server? it means the client can't be sure that what he posted stays. furthermore, once you allow both the server and client to modify resources, there's a risk of races. (or, maybe the part of the server that changes the resource can be thought of as a client?)

> Anyway, thanks for reading my dialogue articles!
> I have been hesitant to
> notify this list of their existence because: (a) they're not complete
> yet, and (b) they may be seen as a challenge to the direction that REST
> has gone towards the CRUD view since The Thesis, and I don't want to be
> seen as in any way confrontational.
>
> Plus, I'm aware that I got a ton of traffic to my blog via a kind link
> from DHH - and he has just converted Rails to CRUD-REST... =0/
>
> However, now I've been linked to on the list itself, I suppose it's
> pointless not coming over here and defending myself.
>
> So - I promote a /symmetric/ REST point of view, with active resources
> being dependent on each other and conveying state between themselves
> with either GET or POST depending on which party initiates the transfer.
>
> I do hope and believe this pattern is still REST-compatible. Please read
> part 3 of my series
> (http://duncan-cragg.org/blog/post/business-functions-rest-dialogues/)
> for more explanation of this pattern.
>
> Of course, I probably end up thereby promoting POST idempotency, but I
> see that as a good thing. Example: if you've added something to a list,
> adding it again doesn't do anything. For some function same(resx,resy),
> of course.
>
> Now, Benjamin Carlyle, who should always be heeded, said:
>
> > .. REST replaces the doSomething concept with a "make something so"
> > concept. If you think about it, all doSomething can be modelled
> > this way. .. "make your state this".
>
> This is an example of what I currently call (in my grubby lab notebook)
> a 'transformation intent' - where some party expresses directly to a
> resource that they'd like some state to become manifest in or perhaps
> around that target resource.
>
> My heresy (maybe evolution, then) is that I don't see the future of REST
> being constrained to resources responding to such transformation intents
> of other parties: I see some (most!)
resources transforming /themselves/ > in reaction to the state of /other/, peer resources. Allow me to > elaborate.. > > I break REST interaction down into three modes - from dumb to smart, via > dependent: > > -: If a resource receives such a direct transformation request or > intent, it may be *dumb* and go ahead and do just what it's told, > whenever it's told. That's what I was talking about in parts 1 and 2 of > the dialogues. It's a bit like a database. I'd still do without PUT and > DELETE, mind, to prevent it being seen that way! > > -: Alternatively - what seems to be the subject of this thread - it may > have *real-world dependency*: maybe it can't just switch to 'running' > until the real world thing it models actually /is/ running! So, when it > receives a direct transformation intent, it goes off and satisfies that > constraint by ensuring it's ticking over in reality, and only then > changes its visible state to 'running'. > > -: Finally, the resource may be *smart*, and decide to switch to > 'running' because of the rule that, as long as Joe's resource is > running, it should be running itself. So it spots Joe's resource > running, and starts running without even being told to! That's what I > was talking about in part 3 of my dialogues. > > The latter is advanced REST programming in transformation rules, I > suppose. Spontaneous transformation without any direct transformation > intent - indirect implication or even deduction. > > I haven't directly covered the middle - hidden real-world > responsibilities - case so far in my article series. I didn't think > settable resources with non-disk-state side-effects were sufficiently > common in the usual REST integration world to warrant coverage as yet! > > Except insofar as it's implied by the first - dumb - case, in that the > visible state of a resource obviously shouldn't be set to something for > everyone to see until it's actually saved in that state on disk. 
> I also
> alluded to this sort of thing when talking about email side effects in
> part 3.
>
> A printer comes to mind as another example of such a resource: you may
> want to take it offline by requesting online/offline state. A print
> queue resource is related to this.
>
> An example which is /not/ like this is a user interface, which 'watches'
> other state rather than being told what state to achieve directly, and
> is more like the third, smart, dependent resource, case above.
>
> (Actually, in Second Life, I think someone can push your 'avatar
> resource' and directly change what you can see, but now we're really
> looking ahead to the future of REST!)
>
> I'll get to these examples later in my series. Which may now need to be
> ten parts =0(
>
> Meanwhile, I'd be delighted to hear what you - Roy Fielding - have to
> say about all this... =0)
>
> Cheers!
>
> Duncan
>
> _________________________________
> Duncan Cragg
> http://duncan-cragg.org/blog/

--
===================================
Ittay Dror, Chief architect, R&D, Qlusters Inc.
ittayd@...
+972-3-6081994 Fax: +972-3-6081841
www.openqrm.org - Data Center Provisioning
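[Editor's note: Ittay's printer question (what if the PUT of 'online' can't be honoured?) maps onto Duncan's "real-world dependency" mode: the resource makes the requested state visible only once the underlying action succeeds, and exposes 'error' otherwise. A hedged sketch with invented names; how a real service would report this (status codes, an error sub-resource) is a separate design choice the thread leaves open.]

```python
class PrinterResource:
    """Sketch of a resource with a real-world dependency: the visible
    state becomes 'online' only if the physical action succeeds.
    'bring_online' stands in for whatever actually talks to hardware;
    both names are invented for this example."""

    def __init__(self, bring_online):
        self._bring_online = bring_online
        self.status = "offline"

    def put_status(self, requested):
        if requested == "online":
            if self._bring_online():   # try to satisfy the intent
                self.status = "online"
            else:
                self.status = "error"  # server-side state transition
        else:
            self.status = requested
        return self.status             # what a subsequent GET would show

ok_printer = PrinterResource(bring_online=lambda: True)
broken_printer = PrinterResource(bring_online=lambda: False)
```

On this reading, the "server changing the resource" that worries Ittay is just the resource satisfying (or failing to satisfy) the client's transformation intent, so a client should re-GET rather than assume its PUT stuck.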
On Jan 13, 2007, at 2:30 PM, Benjamin Carlyle wrote: > Let's consider the blueprint case. You want the server to store a > blueprint, so you give your blueprint to the register resource. This > actually sounds more like your container pattern. You POST the > blueprint > to the factory resource. The return verb is "Created", with a location > header that points you to the created resource. You could update the > blueprint with a PUT, and deregister the blueprint by DELETE-ing the > created resource. Your server could be replaced by a range of possible > blueprint-storage systems, and the client would continue to work. It > would just issue its POST to whatever it was configured with. This is so obvious that I'm stunned I missed it. Blueprints should definitely be modeled as a container. > Instantiation also sounds like your container pattern to me. This time > you are posting the state of a new service (which includes > configuration) to a service factory. It would create a resource to > represent the ongoing state of the service. A DELETE would destroy the > service, a PUT would change its state. There is a key difference with instantiation. In the case of blueprints, adding a blueprint did result in the creation of a sub-resource. Example: http://server/host/blueprints/my_new_blueprint However, in the case of instantiation, there is no sub-resource assumption. Thus, the location of the new service could be: http://server/my_new_service I think the sub-resource relationship is key to the container pattern, because it defines the scope of influence. However, I can't apply this restriction to the Host service, which is a peer to other services. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
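The container/factory pattern Steve and Benjamin are discussing can be sketched in a few lines. This is a minimal in-memory model, not a real HTTP server; the class name, paths, and return shapes are all illustrative.

```python
# Sketch of the blueprint "container" pattern: POST to a factory resource
# creates a sub-resource (201 Created plus a Location header), PUT updates
# it, DELETE deregisters it. Names and paths are invented for illustration.
import itertools

class BlueprintContainer:
    def __init__(self, base="/host/blueprints"):
        self.base = base
        self.store = {}
        self.ids = itertools.count(1)

    def post(self, blueprint):
        """Create a sub-resource; mimics '201 Created' with a Location header."""
        uri = f"{self.base}/bp{next(self.ids)}"
        self.store[uri] = blueprint
        return 201, {"Location": uri}

    def put(self, uri, blueprint):
        """Replace the blueprint stored at a known URI."""
        if uri not in self.store:
            return 404
        self.store[uri] = blueprint
        return 200

    def delete(self, uri):
        """Deregister the blueprint."""
        if self.store.pop(uri, None) is None:
            return 404
        return 204

container = BlueprintContainer()
status, headers = container.post({"name": "my_new_blueprint"})
print(status, headers["Location"])  # 201 /host/blueprints/bp1
```

As Steve notes, nothing in the client depends on what sits behind the factory: swapping the storage backend leaves the POST/PUT/DELETE conversation unchanged.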
I searched but had a hard time isolating a similar topic... I'm working on a REST-based application and am struggling with the challenge of when to return a resource for editing (i.e. as a form) vs. when to simply return the resource. I could always present the content in a form if the user is authorized, but this poses a couple of challenges: - additional overhead of populating selection lists when not needed - users may worry that they have changed something when that was not their intent. - Does somebody have experience that would suggest a different URI structure for when doing a GET to edit a resource? - Would it be appropriate to pass a parameter in this case, i.e. ?edit=yes Thanks,
Hi David, > -Does somebody have experience that would suggest a different URI > structure for when doing a GET to edit a resource? > -Would it be appropriate to pass a parameter in this case, ie ? > edit=yes I don't know if this is the "right" answer, but that's the way that Rails 1.2 does it: http://topfunky.com/clients/peepcode/REST-cheatsheet.pdf Though they use ";edit" as the modifier, to distinguish from queries. -enp On Jan 16, 2007, at 11:41 AM, david.nusbaum wrote: > I searched but had a hard time isolating a similar topic... > > I'm working a rest based application and am struggling with the > challenge of when to return a resource for edit (if a form) vs when to > simply return the resource. I could always present the content in a > form if the user is authorized, but this poses a couple of challenges: > - additional overhead of populating selection lists when not needed > - user may worry that than have changed something when that was not > their intent. > > -Does somebody have experience that would suggest a different URI > structure for when doing a GET to edit a resource? > -Would it be appropriate to pass a parameter in this case, ie ? > edit=yes > > Thanks, > > >
"david.nusbaum" <david.nusbaum@...> writes: > -Does somebody have experience that would suggest a different URI > structure for when doing a GET to edit a resource? > -Would it be appropriate to pass a parameter in this case, ie > ?edit=yes I struggled with this for a while with the app that I am working on. I choose to send the "gui" with any authorized request. I usually have an edit button that displays the gui. It works very well. Authenticated people are not such a caching concern anyway (because they're authenticated it's often the case that the content can't be cached). And it feels more natural to have a single resource controlling it's own destiny as it were. I'll be showing people my app this week so you'll be able to see this working, even if it's still a bit rough and ready. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
On Tue, 2007-01-16 at 19:41 +0000, david.nusbaum wrote:
> I'm working a rest based application and am struggling with the
> challenge of when to return a resource for edit (if a form) vs when
You return representations, not resources.
It might make it less confusing to realize that it is the "view" and
"edit" representations that are separate, not the resource.
> -Does somebody have experience that would suggest a different URI
> structure for when doing a GET to edit a resource?
> -Would it be appropriate to pass a parameter in this case,
> ie ?edit=yes
Sure.
- GET /wiki/foo
- [click edit]
- GET /wiki/foo?mode=edit
(or /wiki/foo/editable, or /wiki/foo;edit, or...)
- [make changes]
- POST /wiki/foo
Seems pretty webby and RESTful to me.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org;echo ${a}@${b}
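Josh's flow hinges on deriving the edit URI from the view URI. A small sketch of that derivation, assuming the ?mode=edit convention from his example (the Rails-style ;edit variant would be a one-line change):

```python
# Derive the "edit" representation's URI from the "view" URI by appending
# a query parameter. The mode=edit convention is one of several mentioned
# in this thread; nothing here is prescribed by HTTP itself.
from urllib.parse import urlencode, urlsplit, urlunsplit

def edit_uri(view_uri, param="mode", value="edit"):
    parts = urlsplit(view_uri)
    query = parts.query + ("&" if parts.query else "") + urlencode({param: value})
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

print(edit_uri("/wiki/foo"))  # /wiki/foo?mode=edit
```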
Josh Sled wrote: > On Tue, 2007-01-16 at 19:41 +0000, david.nusbaum wrote: >> I'm working a rest based application and am struggling with the >> challenge of when to return a resource for edit (if a form) vs when > > You return representations, not resources. > > It might make it less confusing to realize that it is the "view" and > "edit" representations that are separate, not the resource. IMO it's perfectly acceptable to have a resource which allows one to edit another resource, and which therefore accepts and returns representations at a URI different from that of the resource being edited.
"Joe Gregorio" <joe@...> writes: > As long as we are being brutally honest about how things work > in *the real world*, I'll just point out that if some VP > discovered that he couldn't edit his blog from his shiny new Nokia phone > because the proxy server blocked PUT requests, then that proxy would > get changed so fast it would make your eyes bleed. Maybe things are different in the states, but here in the UK the idea of a VP who edits his blog from his mobile phone is something of a fiction. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
On 1/9/07, Nic James Ferrier <nferrier@...> wrote: > > I think Elliotte is correct that we could fix the problem by getting > the proxy makers to change their proxies. > > However, how long would it take to fix the problem. The big > organization that I was referring to had (amongst others) a Novell > Netware proxy server. It was at least 10 years old. > > I recently made a trip to a medium sized company who were still using > Microsoft Proxy Server 1.0. I don't even want to think about how old > that is. As long as we are being brutally honest about how things work in *the real world*, I'll just point out that if some VP discovered that he couldn't edit his blog from his shiny new Nokia phone because the proxy server blocked PUT requests, then that proxy would get changed so fast it would make your eyes bleed. To put this in perspective we're talking about a configuration option on an HTTP proxy/firewall in a company that 11 years ago was probably running SNA over token ring. To pretend that things will stay the same as they are today is, at best, delusional. -joe -- Joe Gregorio http://bitworking.org
Which HTTP method should be used for safe but confidential operations, in which the query string might reveal data? For example, requesting the current balance for a credit card: http://www.bank.com/statements?number=1234567890984567 Assume HTTP authentication is being used, but we might still wish to avoid shoulder surfing. Is it acceptable to use POST here to hide the number from prying eyes, or should this still be GET? Why or why not? -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Harold <elharo@...> writes: > Which HTTP method should be used for safe but confidential operations, > in which the query string might reveal data. For example, requesting the > current balance for a credit card: > > http://www.bank.com/statements?number=1234567890984567 > > Assume HTTP authentication is being used, but we might still wish to > avoid shoulder surfing. > > Is it acceptable to use POST here to hide the number from prying eyes, > or should this still be GET? Why or why not? It should be GET. Your credit card number isn't secret. You hand it over to all sorts of strangers every day. To address the point of shoulder surfing: if someone is standing behind you at an ATM looking over your shoulder do you: a) carry on typing in your PIN? b) turn round, write the PIN on a piece of paper and give it to the surfer? c) stop entering your PIN, even if you have to go to another machine somewhere else? And when you get your statement are you still going to be standing there with the shoulder surfer? Because the resource you're looking at may well have the account number in it. How are you going to stop them looking at that? Maybe a sudden distraction? Try throwing a cat across the table. I've been asked these sorts of things a lot by people connected with projects. They all seem to come from a desire to be secure... but with no realistic view of security. Security through obscurity is no security, etc. The really bad thing about things like that is that they cause architectural problems. I have many times come to a developed system that is broken architecturally, only to find out that underlying the break is something spurious that is the way it is because "it just is". Inevitably, it's something like the above. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Elliotte Harold wrote: > Which HTTP method should be used for safe but confidential operations, > in which the query string might reveal data. For example, requesting the > current balance for a credit card: > > http://www.bank.com/statements?number=1234567890984567 > > Assume HTTP authentication is being used, but we might still wish to > avoid shoulder surfing. > > Is it acceptable to use POST here to hide the number from prying eyes, > or should this still be GET? Why or why not? Surely the protocol should be HTTPS not HTTP for such a request? GET should be fine under such circumstances. -- Chris Burdess
Elliotte Harold wrote: > Which HTTP method should be used for safe but confidential operations, > in which the query string might reveal data. For example, requesting the > current balance for a credit card: > > http://www.bank.com/statements?number=1234567890984567 I'd probably build that *completely* differently so that the number used for identification was not the credit card number. At a more general level though: When GET is used over HTTPS the SSL is happening at a lower level than the HTTP (for which reason if HTTPS were registered today it would probably be denied a port number and the IETF would insist that a different mechanism were used to indicate that an encrypted transport protocol was being used underneath the HTTP - the rule against separate port number assignments for "Secure form of X" wasn't in place when HTTPS came on the scene though). Therefore https://www.bank.com/statements?number=1234567890984567 is safe at the transport level. https://www.bank.com/statements/3r421 where 3r421 identifies the account which uses the credit card number 1234567890984567 is safer still. Of course "safe" assumes other reasonable precautions are taken.
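Jon's safer URI (https://www.bank.com/statements/3r421) relies on the server keeping a mapping between an opaque identifier and the real account number. A minimal sketch, assuming an in-memory directory; the class name and token format are invented for illustration:

```python
# Map a secret (the card number) to a stable opaque token that can appear
# safely in URIs, browser histories, and logs. Real systems would persist
# this mapping; here it is an in-memory table.
import secrets

class AccountDirectory:
    def __init__(self):
        self._by_token = {}
        self._by_card = {}

    def token_for(self, card_number):
        """Return a stable opaque token for a card number, minting one if needed."""
        if card_number not in self._by_card:
            token = secrets.token_urlsafe(6)
            self._by_card[card_number] = token
            self._by_token[token] = card_number
        return self._by_card[card_number]

    def card_for(self, token):
        return self._by_token.get(token)

directory = AccountDirectory()
token = directory.token_for("1234567890984567")
print(f"https://www.bank.com/statements/{token}")
```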
On 1/19/07, Nic James Ferrier <nferrier@...> wrote: > Elliotte Harold <elharo@...> writes: > > > Which HTTP method should be used for safe but confidential operations, > > in which the query string might reveal data. For example, requesting the > > current balance for a credit card: > > > > http://www.bank.com/statements?number=1234567890984567 > > > > Assume HTTP authentication is being used, but we might still wish to > > avoid shoulder surfing. > > > > Is it acceptable to use POST here to hide the number from prying eyes, > > or should this still be GET? Why or why not? > > It should be GET. Seems a bit pedantic. Just because GET is defined to be safe does not mean you cannot do safe operations with POST. And the benefits of GET (e.g. cacheability) don't seem to apply here.
Bob Haugen wrote: > Seems a bit pendantic. Just because GET is defined to be safe does not > mean you cannot do safe operations with POST. And the benefits of GET > (e.g. cacheability) don't seem to apply here. That's not the only benefit of GET and caching could apply - not a shared cache obviously, but a secure private cache isn't an impossibility. You can do safe operations with POST, but code blocking all potentially unsafe operations would block it.
On Jan 19, 2007, at 5:26 AM, Chris Burdess wrote: > Elliotte Harold wrote: >> Which HTTP method should be used for safe but confidential >> operations, >> in which the query string might reveal data. For example, >> requesting the >> current balance for a credit card: >> >> http://www.bank.com/statements?number=1234567890984567 >> >> Assume HTTP authentication is being used, but we might still wish to >> avoid shoulder surfing. >> >> Is it acceptable to use POST here to hide the number from prying >> eyes, >> or should this still be GET? Why or why not? > > Surely the protocol should be HTTPS not HTTP for such a request? GET > should be fine under such circumstances. > One other place the credit card number will show up, though, is in the browser's history. If someone were to use this from a public terminal and not clear the history, then the next person along could get the credit card number by looking at the history. Thus I'd lean towards some other approach, such as POST the credit card number, and redirect to a GET that includes a different "surrogate" number, not the credit card number, that on the server is mapped to the credit card account. The user would need to authenticate to be able to see that page, so even though that surrogate number would be visible in the browser cache, someone else who came along wouldn't be able to access it unless the previous fellow forgot to log out. (Which is still a possibility.) Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
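Bill's POST-then-redirect idea can be sketched as a pair of handlers: the POST mints a surrogate and answers 303 See Other, and the GET refuses to serve the statement to anyone but the authenticated owner. All names and the session model here are hypothetical:

```python
# POST the card number once; redirect to a GET-able URI carrying only a
# surrogate. Only the owner's authenticated session may dereference it,
# so a surrogate left in a shared browser history is useless to the next user.
import secrets

SURROGATES = {}  # surrogate -> (card number, owning user)

def post_statement_lookup(card_number, user):
    """Handle POST /statements: mint a surrogate, answer 303 See Other."""
    surrogate = secrets.token_hex(4)
    SURROGATES[surrogate] = (card_number, user)
    return 303, {"Location": f"/statements/{surrogate}"}

def get_statement(surrogate, user):
    """Handle GET /statements/<surrogate>: only the owner may view it."""
    record = SURROGATES.get(surrogate)
    if record is None:
        return 404, None
    card, owner = record
    if user != owner:
        return 403, None  # someone replaying the URL from the history
    return 200, f"statement for card ending {card[-4:]}"

status, headers = post_statement_lookup("1234567890984567", user="alice")
print(status, headers["Location"])  # 303 /statements/<opaque>
```

As Bill notes, this still fails open if the previous user forgets to log out; the surrogate only removes the card number itself from the history.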
Chris Burdess wrote: > Surely the protocol should be HTTPS not HTTP for such a request? GET > should be fine under such circumstances. That only solves the Bob problem. It does nothing about Dave standing around watching Alice read her screen. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On Fri, Jan 19, 2007 at 07:52:20AM -0500, Elliotte Harold wrote:
> Which HTTP method should be used for safe but confidential operations,
> in which the query string might reveal data. For example, requesting the
> current balance for a credit card:
>
> http://www.bank.com/statements?number=1234567890984567
>
> Assume HTTP authentication is being used, but we might still wish to
> avoid shoulder surfing.
> Is it acceptable to use POST here to hide the number from prying eyes,
> or should this still be GET? Why or why not?
I think that you should still be using GET here. However, you could
encrypt the query-string in order to render it unusable without the key,
then decrypt on the appserver. Ideally, the key would be based on
criteria that would be hard to reproduce, eg: session start time,
session id, IP address, &c.
--
Ceri Storey <cez@...>
'What I really want is "apt-get smite"'
--Rob Partington
http://unix.culti.st/
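Ceri's suggestion can be roughed out with the standard library. Python's stdlib ships no block cipher, so this sketch substitutes a session-keyed HMAC token for literal encryption of the query string; treat it as the shape of the idea (key derived from hard-to-reproduce session criteria, query string rendered unusable without it), not a vetted crypto design:

```python
# Derive a per-session key from criteria that are hard to reproduce, then
# mint an opaque token that stands in for the account number in the query
# string. The appserver keeps its own token -> account mapping per session.
import hashlib
import hmac

def session_key(session_id, start_time, ip):
    """Derive a per-session key from session id, start time, and client IP."""
    material = f"{session_id}|{start_time}|{ip}".encode()
    return hashlib.sha256(material).digest()

def mint_token(key, account_number):
    """Opaque, session-bound stand-in for the account number."""
    return hmac.new(key, account_number.encode(), hashlib.sha256).hexdigest()[:16]

key = session_key("s-42", 1169164800, "192.0.2.7")
token = mint_token(key, "1234567890984567")
print(f"/statements?account={token}")  # opaque without the session key
```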
On Fri, 2007-01-19 at 15:10 +0000, Jon Hanna wrote: > Bob Haugen wrote: > > Seems a bit pendantic. Just because GET is defined to be safe does > not > > mean you cannot do safe operations with POST. And the benefits of > GET > > (e.g. cacheability) don't seem to apply here. > > That's not the only benefit of GET and caching could apply - not a > shared cache obviously, but a secure private cache isn't an > impossibility. > > You can do safe operations with POST, but code blocking all > potentially > unsafe operations would block it. It seems that Jon Hanna hit the nail on the head by suggesting that a non-secret token be substituted for secret data in identifiers. However, I suggest that secure communications is one of those areas where the assumptions that REST is based on do not hold so tightly. Secret data has fewer scalability concerns than widely known data, because secrecy itself is not scalable. You don't expect a million or even a thousand clients to all ask for the statement on a particular credit card. You expect maybe 10 hits per day for a busy account. If a lot of people have access to a particular secret, it isn't very secret. This is why SSL is often the right answer to secure messaging, despite limiting the application of caches and the use of intermediaries for other purposes. Benjamin
On Tue, 2007-01-16 at 17:07 -0500, Josh Sled wrote: > On Tue, 2007-01-16 at 19:41 +0000, david.nusbaum wrote: > > I'm working a rest based application and am struggling with the > > challenge of when to return a resource for edit (if a form) vs when > You return representations, not resources. > It might make it less confusing to realize that it is the "view" and > "edit" representations that are separate, not the resource. I would argue that the reason for having different representations available from a resource is to provide different levels of semantic fidelity based on client capabilities. Across these representations I would expect essentially the same information to be returned. Here, I think you are talking about one representation that returns resource content and another representation that returns a form to edit that content. Both representations would presumably be HTML, so they don't sound like representations of the same resource. They convey different information using the same content type. It sounds like they would require different URLs. The two resources are related. They likely share state on the server. However, hyperlinking to one is a very different matter than hyperlinking to the other. You indicate as much here: > - GET /wiki/foo > - [click edit] > - GET /wiki/foo?mode=edit > (or /wiki/foo/editable, or /wiki/foo;edit, or...) > - [make changes] > - POST /wiki/foo An alternative design would be to include the edit form as part of the main representation, perhaps with its visible attribute set to false when JavaScript is available. A JavaScript toggle on the page might make the edit form visible and allow submission back to the server. This single response document design would only need one URL. Benjamin.
G'day, I have been thinking about the idempotency of POST lately, and the exchange with Steve Bjorg has prompted me to write about it. My current direction is to treat a POST of null to a factory resource as idempotent. A null POST to a factory resource would create a resource that could be PUT to safely. Either the null POST or the PUT could be repeated safely if they time out. I'm cautious about using the POE specification due to the way that it seems to use up the POST method for the created resource, and doesn't really define a method for creating the temporary resource in the first place. Null POST idempotency seems to me to be an appropriate way of thinking about the problem: Add no state to be demarcated by a temporary resource, then replace that null state with the content I would have otherwise POSTed. The PUT also converts the temporary resource into a resource with a normal lifetime. Potential issues: 1. Idempotency of null POST My biggest concern is about encoding the idempotency assumption into the client. If it happened to be unsafe for a particular server, the client could trigger unfortunate behaviour. For this reason it may be useful to introduce a new method that performs this setup. I'm thinking something like PREPARE might be appropriate. 2. Temporary Resource lifetime Temporary resources would need to expire after some time to allow the server to reclaim memory and other... well... resources. I suggest that a rough timer would be the simplest approach: Sweep every 40s and destroy temporary resources that are still present from the last sweep. The exact mechanism and timing would be up to the server. 3. Temporary Resources cleaned up too soon The client behaviour on seeing a 404 come back from PUT might be an issue. Did the PUT succeed and has the resource been destroyed by normal means, or did the resource timeout before a successful PUT? It may be necessary to prevent a live resource from being converted to a 404 too quickly. 
Perhaps keeping a 410 up for at least the normal lifetime of the temporary resource would indicate to the client that its earlier PUT was successful. This temporary resource lifetime would have to be long enough to ensure all reasonable client activity worked correctly. If the client sees a 404 it has no option but to return an error to its user indicating that it doesn't know whether the PUT occurred or not. 4. Overconsumption of server-side state The final question is how to deal with clients that issue too many PREPARE requests, thereby consuming unreasonable quantities of server-side resources. It may be important to impose a maximum PREPARE rate based on source IP or other request characteristic. Thoughts? Benjamin.
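Points 2 and 3 above can be sketched as a two-phase sweep: a temporary resource survives one sweep, is destroyed on the next, and is then remembered as 410 Gone rather than answered with 404. The class and URIs are illustrative:

```python
# Two-phase sweep for temporary resources: every sweep (the thread suggests
# roughly every 40s; exact timing is up to the server) destroys temporaries
# already marked on the previous sweep, and keeps destroyed ones reporting
# 410 Gone so a client whose PUT timed out can tell "existed and expired"
# apart from "never existed".

class TemporaryResources:
    def __init__(self):
        self.fresh = set()   # created since the last sweep
        self.marked = set()  # survived one sweep; the next sweep destroys them
        self.gone = set()    # destroyed; report 410 instead of 404

    def prepare(self, uri):
        self.fresh.add(uri)

    def sweep(self):
        """Called periodically by the server."""
        self.gone |= self.marked
        self.marked = self.fresh
        self.fresh = set()

    def status(self, uri):
        if uri in self.fresh or uri in self.marked:
            return 200
        if uri in self.gone:
            return 410  # the temporary existed; a timed-out PUT may have landed
        return 404

tmp = TemporaryResources()
tmp.prepare("/tmp/abc")
tmp.sweep()                   # /tmp/abc survives its first sweep
print(tmp.status("/tmp/abc"))  # 200
tmp.sweep()                   # the second sweep destroys it
print(tmp.status("/tmp/abc"))  # 410
```

A fuller version would also age the `gone` set out after the "normal lifetime" Benjamin describes, eventually allowing 404.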
Steve, On Tue, 2007-01-16 at 07:05 -0800, Steve Bjorg wrote: > On Jan 13, 2007, at 2:30 PM, Benjamin Carlyle wrote: > > Instantiation also sounds like your container pattern to me. This > time > > you are posting the state of a new service (which includes > > configuration) to a service factory. It would create a resource to > > represent the ongoing state of the service. A DELETE would destroy > the > > service, a PUT would change its state. > There is a key difference with instantiation. In the case of > blueprints, adding a blueprints did result in the creation of sub- > resource. > Example: > http://server/host/blueprints/my_new_blueprint > However, in the case of instantiation, there is no sub-resource > assumption. Thus, the location of the new service could be: > http://server/my_new_service > I think the sub-resource relationship is key to the container > patters, because it defines scope of influence. However, I can't > apply this restriction to the Host service, which is peer to other > services. Maybe to a human designer on the server side, but in practice from all perspectives... I don't think so. You might want to give the pattern a different name when the resources aren't given a subpath under the factory resource, but the mechanism and concept is the same. For this reason I usually talk about factories rather than containers. The state may be demarcated by resources hosted under the factory's path, or elsewhere (Created). The new state may be added to the factory resource directly, or given no client-visible handle at all (OK, or No Content). I have heard the container pattern described several times, but it only really makes sense when we don't arbitrarily require new resources to live under the resource we POSTed to. Clients certainly can't assume it will be so. The semantics of POST don't include the restriction, and it is inadvisable to communicate to the client that they can make the assumption. After all, you might change your mind later.
The Location header in a POST could direct the client anywhere. It might be a partner website. It might contain an authority that takes the client to a different server cluster within your data centre. The client may choose not to apply the same trust metrics to a new authority, or even to a different path structure under the same authority. The server should take this into account as part of the design. However, the freedom to decide where the new resource goes is part of the prerogative of the server. Benjamin.
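The client-side consequence of Benjamin's point is that a client must resolve whatever Location it is given (possibly relative, possibly a different authority) and re-apply its trust decisions before using it. A sketch, with invented host names:

```python
# Resolve a Location header from a POST response and vet its authority.
# The trust policy here (a plain allow-list) is only illustrative; the
# point is that the server, not the client, decides where the new
# resource lives, and the client must cope with any answer.
from urllib.parse import urljoin, urlsplit

TRUSTED_HOSTS = {"server.example.com", "partner.example.net"}

def created_uri(request_uri, location):
    """Resolve Location (which may be relative) and check the resulting host."""
    uri = urljoin(request_uri, location)
    host = urlsplit(uri).netloc
    if host not in TRUSTED_HOSTS:
        raise ValueError(f"untrusted authority: {host}")
    return uri

print(created_uri("http://server.example.com/factory", "/services/svc1"))
# http://server.example.com/services/svc1
```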
On Tue, 2007-01-16 at 09:11 +0200, Ittay Dror wrote: > let's see if i got it right: the difference of REST design vs RPC > (SOAP) is that in REST, the client states what the final state of the > resource should be, and the server does whatever it needs to > accomplish that. in RPC, the client initiates a process which can > change states as it progresses. Well, to be clear... HTTP REST clients state what the final state of the resources should be, and the server does whatever it needs to accomplish that. In RPC, the client and server can do what they like. The REST difference is that the request can be understood by arbitrary components in the network. This includes intermediaries of various types, as well as more generic clients like browsers that were not written for the sole purpose of having this specific conversation. > if this is true, what happens if the server can't reach the declared > state? e.g., i have a printer, which i want to put online. i can > POST/PUT 'online' to http://example.org/printers/pr1/status. but what > if the actual process of making the physical printer fails? won't it > be confusing if http://example.org/printers/pr1/status changes to > 'error' by the server? it means the client can't be sure that what he > posted stays. furthermore, once you allow both the server and client > to modify resources, there's a risk of races. (or, maybe the part of > the server that changes the resource can be thought of as a client?) That is a server error. 500 is a good default, though if the reason for the failure is known you could be more specific. 502 Bad Gateway and 504 Gateway Timeout can be useful codes. Benjamin.
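Benjamin's answer can be put in code: the PUT declares the desired printer status, and a failure to realize it in the physical device surfaces as a 5xx response rather than a silent change to 'error'. The device interface here is invented for illustration:

```python
# Handle a PUT of desired state against a resource backed by a real device.
# drive_device(desired) stands in for whatever actually talks to the printer:
# it returns True on success, False on refusal, and raises TimeoutError if
# the device does not answer.

def put_status(printer, desired, drive_device):
    try:
        ok = drive_device(desired)
    except TimeoutError:
        return 504, printer["status"]  # Gateway Timeout: device didn't answer
    if not ok:
        return 502, printer["status"]  # Bad Gateway: device refused
    printer["status"] = desired        # only now change the visible state
    return 200, printer["status"]

printer = {"status": "offline"}
print(put_status(printer, "online", lambda s: True))  # (200, 'online')
```

Note that the resource's visible state only changes after the real-world action succeeds, which addresses Ittay's worry about the client's PUT "not staying".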
On Tue, 2007-01-16 at 01:45 +0000, Duncan Cragg wrote: > So - I promote a /symmetric/ REST point of view, with active > resources > being dependent on each other and conveying state between themselves > with either GET or POST depending on which party initiates the > transfer. > > I do hope and believe this pattern is still REST-compatible. Please > read > part 3 of my series > (http://duncan-cragg.org/blog/post/business-functions-rest-dialogues/) > for more explanation of this pattern. > > Of course, I probably end up thereby promoting POST idempotency, but > I > see that as a good thing. Example: if you've added something to a > list, > adding it again doesn't do anything. For some function > same(resx,resy), > of course. I think you are talking about making the POST idempotent by including a unique identifier in the request content. If so, it sounds like an application-specific form of idempotency. The server understands the message, and when it goes to file it away it notices it already has a matching record. This approach is often valid from the server side, however the client is not in a particularly good position to predict whether its request will be processed multiple times. I haven't read your content in detail as yet, but you also seem to be including a pub/sub mechanism in your model. Again without knowing how much of this you have covered exactly, subscription also has its complications :) > I break REST interaction down into three modes - from dumb to smart, > via > dependent: > -: If a resource receives such a direct transformation request or > intent, it may be *dumb* and go ahead and do just what it's told, > whenever it's told. That's what I was talking about in parts 1 and 2 > of > the dialogues. It's a bit like a database. I'd still do without PUT > and > DELETE, mind, to prevent it being seen that way! So, a flat file or equivalent that doesn't have any overlap with other resources.
> -: Alternatively - what seems to be the subject of this thread - it > may > have *real-world dependency*: maybe it can't just switch to 'running' > until the real world thing it models actually /is/ running! So, when > it > receives a direct transformation intent, it goes off and satisfies > that > constraint by ensuring it's ticking over in reality, and only then > changes its visible state to 'running'. I'm a SCADA guy, so this is a kind of resource that comes frequently to mind for me. This kind of resource can have knock-on effects also. If I start a fan in a chiller plant for a building I am likely to see changes to the resources demarcating temperature gauge state. These changes slip between resources via the implementation of these resources, specifically the monitoring of changes to real world conditions. > -: Finally, the resource may be *smart*, and decide to switch to > 'running' because of the rule that, as long as Joe's resource is > running, it should be running itself. So it spots Joe's resource > running, and starts running without even being told to! That's what I > was talking about in part 3 of my dialogues. I suspect this is also the kind of resource that models most business functions... though I would like to cut to the specifics. I see a set of resources as an API to a service that exposes its functionality in an architecturally-consistent way. Importantly, they are not services in their own right. They share state with each other, but this is not the same as communicating with each other by RESTful means. They are implemented with objects or with embedded database procedures. These implementation-level entities talk to each other. That interaction is what affects the service's resources. So you have a service that is managing which other services/devices/functions are running in its system. It observes a change in one, and starts the other.
The actual observation could be an object notifying others via an observer pattern; it could be a process starting or stopping and generating a SIGCHLD. It could be a resource monitored via a configured or pub/sub notification mechanism or by GET polling. The systems I work with tend to have a lot of pub/sub relationships to trigger knock-on behaviours between services. This is necessary because changes to the real world are unpredictable to even the most aware components in the architecture. Within a service we would typically be talking about the observer pattern. Between services and their cluster management software we would be talking about the cluster keeping track of which processes have started and not yet died. Benjamin.
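The "application-specific idempotency" Benjamin describes earlier in this message looks roughly like this on the server side: the client supplies a unique identifier with its POST, and a replayed request is recognized and answered without creating a duplicate record. Names are illustrative:

```python
# Dedupe POSTs by a client-supplied request identifier. A replay of the
# same identifier returns the original outcome instead of adding a second
# record, which is what makes the POST safe to retry after a timeout.

class DedupingCollection:
    def __init__(self):
        self.records = {}

    def post(self, request_id, body):
        """201 on first sight of request_id; 200 (no new record) on replay."""
        if request_id in self.records:
            return 200, self.records[request_id]
        self.records[request_id] = body
        return 201, body

coll = DedupingCollection()
print(coll.post("req-1", {"item": "widget"}))  # (201, {'item': 'widget'})
print(coll.post("req-1", {"item": "widget"}))  # (200, {'item': 'widget'})
```

As Benjamin notes, the catch is that the client cannot in general know that a given server deduplicates this way; it is an application-level contract, not an HTTP-level one.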
Benjamin Carlyle wrote: > > > G'day, > > I have been thinking about the idempotency of POST lately, and the > exchange with Steve Bjorg has prompted me to write about it. My current > direction is to treat a POST of null to a factory resource as > idempotent. > [...] > Thoughts? Start here: http://www.mnot.net/drafts/draft-nottingham-http-poe-00.txt http://www.dehora.net/doc/httplr/draft-httplr-01.html cheers Bill
Taylor Parsons wrote: > Why would you not use HTTPS? As a user I would *never* enter my credit > card number that was not on HTTPS. I am relatively new to use REST, is > there something that would prevent you from using HTTPS? > One more time everyone: THIS HAS NOTHING TO DO WITH THE CREDIT CARD NUMBER BEING SNIFFED IN TRANSIT. THIS IS ABOUT THE CARD NUMBER (or other confidential data) BEING VIEWED IN THE BROWSER'S LOCATION BAR LOCALLY. HTTPS DOES NOTHING TO SOLVE THIS. Several people have suggested obscuring the card number, but unfortunately the user still has to type it in, and there's no easy way to change it. I think a lot of folks are missing the point. There is a lot of data that might go in a secret field but is not a password. Can we GET such data? -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
> One more time everyone: > > THIS HAS NOTHING TO DO WITH THE CREDIT CARD NUMBER BEING SNIFFED IN TRANSIT. > > THIS IS ABOUT THE CARD NUMBER (or other confidential data) BEING VIEWED > IN THE BROWSER'S LOCATION BAR LOCALLY. In which case it's got bugger all to do with HTTP(S) transmission then. No? -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
Elliotte Harold wrote: > One more time everyone: > > THIS HAS NOTHING TO DO WITH THE CREDIT CARD NUMBER BEING SNIFFED IN TRANSIT. > > THIS IS ABOUT THE CARD NUMBER (or other confidential data) BEING VIEWED > IN THE BROWSER'S LOCATION BAR LOCALLY. The thing is, credit card numbers are not secrets. I mean, they're printed on the front of the credit card in big readable digits. -- Chris Burdess
Chris Burdess <dog@...> writes: > Elliotte Harold wrote: >> One more time everyone: >> >> THIS HAS NOTHING TO DO WITH THE CREDIT CARD NUMBER BEING SNIFFED IN TRANSIT. >> >> THIS IS ABOUT THE CARD NUMBER (or other confidential data) BEING VIEWED >> IN THE BROWSER'S LOCATION BAR LOCALLY. > > The thing is, credit card numbers are not secrets. I mean, they're > printed on the front of the credit card in big readable digits. And you give them to really high risk people every time you go into a restaurant or petrol (that's "gas" to you) station. Honestly, I know of a lot more fraud involving physical handing over of the card than anything remotely clever like shoulder viewing or electronic stealing. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Elliotte Harold <elharo@...> writes: > THIS HAS NOTHING TO DO WITH THE CREDIT CARD NUMBER BEING SNIFFED IN TRANSIT. > > THIS IS ABOUT THE CARD NUMBER (or other confidential data) BEING VIEWED > IN THE BROWSER'S LOCATION BAR LOCALLY. > > HTTPS DOES NOTHING TO SOLVE THIS. > > Several people have suggested obscuring the card number, but > unfortunately the user still has to type it in, and there's no easy way > to change it. > > I think a lot of folks are missing the point. There is a lot of data > that might go in a secret field but is not a password. Can we GET > such data? Yes. Users must (and mostly do, I think) understand the implications of shoulder surfing. They already understand them with ATMs. Web application designers have to take on board some of the things that ATM designers have taken on board (go look at an ATM and notice all the measures they've taken to reduce shoulder surfing potential). For example, if you put the credit card near the bottom of the screen it's more difficult to read. You can try this with someone standing behind you. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
On 1/21/07, Nic James Ferrier <nferrier@...> wrote: > Yes. Users must (and mostly do, I think) understand the implications > of shoulder surfing. Uh, I understand the general point of what you're saying, but, umm, if you sat next to me and I put my credit card into the URL, would you be able to remember it? It's a long number. :) I guess one can say this is security by obscurity, as someone might use their cameraphone and snap a picture. > They already understand them with ATMs. Well, actually they don't, as you don't put in your credit-card number in those machines. I'm even inclined to say that you could read your number out loud to someone slowly, and they still wouldn't remember it. But back to the point, and here's what I would do to hide the card-number in following transactions ; GET https://bank.com/card/1234567890 does a 30x redirect to ; https://bank.com/statement/$1234 which is a hashed version of whatever the card "1234567890" is linked to (which could be a session indicator, a bank account, etc. depending on how your internal system is designed). If POSTing secret form data is the issue (as someone has pointed out), I would connect the form to a hash of a login session, and pass the hash in. So ; GET http://bank.com/login (returns form, with action to the URL below) <form action="https://bank.com/session" method="post"> Create a session<br /> <input type="text" name="username" /> <input type="password" name="password" /> </form> POST https://bank.com/session (create session: returns 30x + session hash $1234) now you can use the hash to ; GET https://bank.com/statement/$1234 until ; DELETE https://bank.com/session/$1234 (or passed through by some form) Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
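Alex's hashed-URL idea — redirecting from the card-number URL to an opaque one — could be sketched like this. The salted-SHA-256 construction and the 16-character token length are illustrative assumptions, not part of his proposal:

```python
import hashlib
import secrets

def opaque_id(card_number, salt):
    """Derive a non-reversible token to use in the URL instead of the card number."""
    return hashlib.sha256((salt + card_number).encode()).hexdigest()[:16]

# A per-session salt means the same card maps to different tokens in
# different sessions, so the URLs aren't linkable across sessions.
salt = secrets.token_hex(8)
token = opaque_id("1234567890", salt)
# GET https://bank.com/card/1234567890  ->  302, Location: /statement/<token>
```

The server keeps the token-to-account mapping; the client only ever sees the token in the location bar.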
Chris Burdess wrote: > The thing is, credit card numbers are not secrets. I mean, they're > printed on the front of the credit card in big readable digits. How about a social security number then? I think people are getting too wrapped up in the details and missing the forest for the trees. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Dave Pawson wrote: > In which case it's got bugger all to do with http(s) transmission then. > > No? > No. What it has to do with is the choice of GET vs. POST, which is quite relevant to HTTP transmission. The REST dogma is that safe operations should use GET, where safe is defined as not changing server state or committing the user in any significant way. E.g. merely reading a page. The question at hand is whether that dogma should or should not be modified to also take into account the sensitivity of the data in the query string that is exposed in the browser location bar, the bookmarks, the history, and other places. The example of such data (and only an example) which I have offered is a credit card statement. Accessing the statement requires providing the credit card number, name on card, and so forth. This is sensitive information but the operation is not unsafe in the traditional sense. Merely reading one's statement is different than purchasing an item, which would be done with POST. Is there a legitimate argument that this safe operation should nonetheless be performed with POST for reasons of security? (If you don't like the credit card example, please feel free to substitute your own example of non-password sensitive data that should not be trivially exposed to prying eyes.) -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Harold <elharo@...> writes: > Chris Burdess wrote: > >> The thing is, credit card numbers are not secrets. I mean, they're >> printed on the front of the credit card in big readable digits. > > How about a social security number then? I think people are getting too > wrapped up in the details and missing the forest for the trees. Each case on its merits. You asked about credit card numbers. If we're talking about a range of different types of number then: I wouldn't like to use a system that put my PIN in a GET. But any "username" (account numbers, id numbers, etc...) would be ok. It's no different to the physical world. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Nic James Ferrier wrote: > If we're talking about a range of differenct types of number then: I > wouldn't like to use a system that put my PIN number in a GET. But > any "username" (account numbers, id numbers, etc...) would be ok. It's > no different to the physical world. > A PIN or password can be sent with HTTP authentication. However many requests may involve more than just one confidential datum. The question remains: Is it reasonable to use POST for a safe operation that transmits confidential data in the query string? I can see arguments on both sides of the question. However, I'm a little surprised at the amount of effort I'm having to expend to get people to consider the question at all. The trouble people are going to evade it suggests to me that this is a very uncomfortable point for a lot of folks, and maybe the REST model (or at least HTTP) has some trouble here. So far I think the two or three folks who've actually addressed the question square on have come down in favor of using POST despite the safety of the operation. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 1/21/07, Elliotte Harold <elharo@...> wrote: > I can see arguments on both sides of the question. However, I'm a little > surprised at the amount of effort I'm having to expend to get people to > consider the question at all. The trouble people are going to evade it > suggests to me that this is a very uncomfortable point for a lot of > folks, and maybe the REST model (or at least HTTP) has some trouble here. As I understand your question, you are purely concerned with shoulder-surfing. If so, then your issue is technically with the browser rather than RESTful architecture. > So far I think the two or three folks who've actually addressed the > question square on have come down in favor of using POST despite the > safety of the operation. There seems to be a general consensus across the thread that the high-level solution is to encode the sensitive data string on the querystring. This can be achieved in a number of ways: 1. POST the value(s) and the server redirects the browser to a URI with the encoded value(s) 2. Encode the value(s) in the browser, either with local javascript or via an AJAX server call, and then update the html hrefs using the DOM. Don't forget the third alternative: 3. Redesign the web application to avoid the need to use sensitive value(s) on a GET. Regards, Alan Dean
Alan Dean wrote: > There seems to be a general consensus across the thread that the > high-level solution is to encode the sensitive data string on the > querystring. That's not generally possible, though. :-( > This can be achieved in a number of ways: > 1. POST the value(s) and the server redirects the browser to a URI > with the encoded value(s) In which case you're using POST, which may indeed be the right answer; but I don't think it's RESTful. > 2. Encode the value(s) in the browser, either with local javascript or > via an AJAX server call, and then update the html hrefs using the > DOM. Bleah. JavaScript cannot in general be relied on. If you have to rely on it, REST is already broken. > Don't forget the third alternative: > 3. Redesign the web application to avoid the need to use sensitive > value(s) on a GET. Easy enough to do, but then we're back to a non-RESTful POST where you should normally use GET. Maybe there are ways around this, but they're really jesuitical. For instance, you could only ask the user for their credit card info as part of an unsafe operation. Then you could assign them the unique ID that is not the credit card number for future use with GET. However, you would not allow the user to check their balance unless they'd done something else unsafe first, e.g. agreeing to a contract. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
> > So far I think the two or three folks who've actually addressed the > > question square on have come down in favor of using POST despite the > > safety of the operation. > > There seems to be a general consensus across the thread that the > high-level solution is to encode the sensitive data string on the > querystring. That still avoids the question of whether it is wrong to use POST for safe operations. I still haven't seen any principled (architectural/technical) argument for that position. Yes, you will lose some of the benefits of GET, but I assume you make these choices with your eyes open, and don't care. What does it hurt to use POST for a query?
On 1/21/07, Elliotte Harold <elharo@...> wrote: > Alan Dean wrote: > > > There seems to be a general consensus across the thread that the > > high-level solution is to encode the sensitive data string on the > > querystring. > > That's not generally possible, though. :-( Why do you assert it is not generally possible? I listed several means to achieve it. > > This can be achieved in a number of ways: > > 1. POST the value(s) and the server redirects the browser to a URI > > with the encoded value(s) > > In which case you're using POST, which may indeed be the right answer; > but I don't think it's RESTful. POST is not antithetical to REST. POST has the fuzziest meaning, to be sure, but is not without meaning. Why is a POST-initiated redirect not RESTful? Furthermore, the target URI will be both RESTful and encoded/obscured. > > > 2. Encode the value(s) in the browser, either with local javascript or > > via an AJAX server call, and then update the html hrefs using the > > DOM. > > Bleah. JavaScript cannot in general be relied on. If you have to rely on > it, REST is already broken. Your conclusion does not follow on from the premise, I think. > > Don't forget the third alternative: > > 3. Redesign the web application to avoid the need to use sensitive > > value(s) on a GET. > > Easy enough to do, but then we're back to a non-RESTful POST where you > should normally use GET. Maybe there are ways around this, but they're > really jesuitical. For instance, you could only ask the user for their > credit card info as part of an unsafe operation. Then you could assign > them the unique ID that is not the credit card number for future use > with GET. However, you would not allow the user to check their balance > unless they'd done something else unsafe first. e.g. agreeing to a contract Perhaps you have misunderstood my point, which is that you could design the URI 'representation space' without recourse to using the credit card number at all. 
If you have authenticated the user (which you have already acknowledged is out-of-scope to your question) then the server can be aware of the credit card without propagating that knowledge to the client (indeed, I would argue that to do so would be bad security design). Regards, Alan Dean
On 1/21/07, Bob Haugen <bob.haugen@...> wrote: > > > So far I think the two or three folks who've actually addressed the > > > question square on have come down in favor of using POST despite the > > > safety of the operation. > > > > There seems to be a general consensus across the thread that the > > high-level solution is to encode the sensitive data string on the > > querystring. > > That still avoids the question of whether it is wrong to use POST for > safe operations. I specifically stated that I was answering the question of shoulder-surfing the browser address bar. > > I still haven't see any principled (architectural/technical) argument > for that position. > > Yes, you will lose some of the benefits of GET, but I assume you make > these choices with your eyes open, and don't care. > > What does it hurt to use POST for a query? Nothing in the HTTP spec prevents using POST as a safe operation. What the spec prohibits is using GET/HEAD for *unsafe* operations. "In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested." http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html Alan
Elliotte Harold <elharo@...> writes: > I can see arguments on both sides of the question. However, I'm a little > surprised at the amount of effort I'm having to expend to get people to > consider the question at all. The trouble people are going to evade it > suggests to me that this is a very uncomfortable point for a lot of > folks, and maybe the REST model (or at least HTTP) has some trouble > here. > > So far I think the two or three folks who've actually addressed the > question square on have come down in favor of using POST despite the > safety of the operation. I don't think that's fair. When you were talking about the use case of credit card numbers this was a very familiar anti-pattern to me. Something that people suggest is confidential actually, on analysis, turns out not to be. I cannot think of a use case of the kind of thing that you're talking about. Some things are confidential. Indeed I wouldn't want someone to look over my shoulder and notice the contents of many web pages that I look at on a daily basis (bank account, visa account, telephone bills, etc...) But I don't see that any of that has anything to do with GET or POST. What's in the URL bar is the least of my concerns. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Elliotte Harold wrote: > I can see arguments on both sides of the question. However, I'm a little > surprised at the amount of effort I'm having to expend to get people to > consider the question at all. The trouble people are going to evade it > suggests to me that this is a very uncomfortable point for a lot of > folks, and maybe the REST model (or at least HTTP) has some trouble here. Oh please, you sound like a witchfinder. Provide a concrete example where your point would matter. If there's really a forest it can't be hard to see. cheers Bill
Elliotte's question is very valid, I think -- even if *you* don't have a problem with your credit card number being more or less public (I know I don't), protecting it is a typical user requirement. If you try to argue with a customer that what they want to see protected is not confidential at all, you've already lost the argument :-) Still, I believe the "best" answer is that REST (or at least HTTP) relies on identifiers that are not secrets. If your identifiers are (or you consider them to be) secret, don't use them as URIs. My personal favorite solution would be to create a new resource representing an application-specific "account" via posting the credit card number, and return a non-meaningful URI for that newly created (and reasonably protected) resource. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On Jan 22, 2007, at 12:56 AM, Bill de hOra wrote: > Elliotte Harold wrote: > > > I can see arguments on both sides of the question. However, I'm a > little > > surprised at the amount of effort I'm having to expend to get > people to > > consider the question at all. The trouble people are going to > evade it > > suggests to me that this is a very uncomfortable point for a lot of > > folks, and maybe the REST model (or at least HTTP) has some > trouble here. > > Oh please, you sound like a witchfinder. Provide a concrete example > where you point would matter. If there's really a forest it can't be > hard to see. > > cheers > Bill
Bill de hOra wrote: > Oh please, you sound like a witchfinder. But a witchfinder that depends upon the witches using Craft names in public or public names in circle during a time of persecution instead of using an identifier appropriate to the context :) The design flaw in http://example.net/doSomethingWith?id=sensitiveIdentifier isn't at the level of the application design that REST deals with.
> THIS HAS NOTHING TO DO WITH THE CREDIT CARD NUMBER BEING SNIFFED IN TRANSIT. > > THIS IS ABOUT THE CARD NUMBER (or other confidential data) BEING VIEWED > IN THE BROWSER'S LOCATION BAR LOCALLY. Have the server do a redirect on the initial request. This way you have the ability to perform a GET on the well-known URL, but it wouldn't actually appear in the browser's address bar.
One more thing: The URL redirected to could be different on every request of the original URL (encoded with using some random number and could be made time-sensitive) -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jeffrey Winter Sent: Monday, January 22, 2007 9:06 AM To: Elliotte Harold; REST Discuss Subject: RE: [rest-discuss] Safe but secret > THIS HAS NOTHING TO DO WITH THE CREDIT CARD NUMBER BEING SNIFFED IN TRANSIT. > > THIS IS ABOUT THE CARD NUMBER (or other confidential data) BEING VIEWED > IN THE BROWSER'S LOCATION BAR LOCALLY. Have the server do a redirect on the initial request. This way you have the ability to perform a GET on the well-known URL, but it wouldn't actually appear in the browser's address bar.
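Jeffrey's per-request, time-sensitive redirect target could be sketched as follows. The HMAC construction, the token layout, and the 300-second window are all my assumptions to make the idea concrete, not part of his suggestion:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; never shipped to the client

def signed_token(resource_id, now=None):
    """Mint a time-bound token to use as the redirect target path segment."""
    ts = int(time.time()) if now is None else now
    mac = hmac.new(SECRET, f"{resource_id}:{ts}".encode(), hashlib.sha256).hexdigest()[:16]
    return f"{resource_id}.{ts}.{mac}"

def verify(token, max_age=300, now=None):
    """Accept the token only if the MAC checks out and it hasn't expired."""
    resource_id, ts, mac = token.rsplit(".", 2)
    ts = int(ts)
    current = int(time.time()) if now is None else now
    expected = hmac.new(SECRET, f"{resource_id}:{ts}".encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(mac, expected) and current - ts <= max_age
```

The well-known URL stays stable and GET-able; only the short-lived token ever appears in the address bar, and it stops working after `max_age` seconds.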
Jeffrey Winter wrote: > >> THIS HAS NOTHING TO DO WITH THE CREDIT CARD NUMBER BEING SNIFFED IN > TRANSIT. >> THIS IS ABOUT THE CARD NUMBER (or other confidential data) BEING > VIEWED >> IN THE BROWSER'S LOCATION BAR LOCALLY. > > Have the server do a redirect on the initial request. This way you > have the ability to perform a GET on the well-known URL, but it wouldn't > > actually appear in the browser's address bar. Sometimes happens with redirects. Just use a safe identifier.
Stefan Tilkov wrote: > Elliotte's question is very valid, I think yes; it's the conclusion re "evading" I have a problem with. cheers Bill
I'd like to publish some content in different formats. I think I'd like to use URLs as follows: http://example.com/docs/doc for "the resource"; and atom, csv, rdf, zip: http://example.com/docs/doc/atom http://example.com/docs/doc/csv http://example.com/docs/doc/rdf appended for "formats". Probably I'll use rel tags/URI templates and a howto page to document that you can get at representations in multiple forms. I know there's conneg at the HTTP level, but it seems to have miserably, utterly, and completely failed* on the web. Thoughts? cheers Bill * http://www.xml.com/pub/a/2004/07/21/dive.html
>> Have the server do a redirect on the initial request. This way you >> have the ability to perform a GET on the well-known URL, but it wouldn't >> >> actually appear in the browser's address bar. > > Sometimes happens with redirects. Just use a safe identifier. I'm not sure what you're getting at here. It seems to me that redirecting from a GET of a "well-known" URL to a randomized, possibly time-sensitive URL addresses all the issues brought up in the original posting.
FWIW, this came over the transom today: http://www.artima.com/weblogs/viewpost.jsp?thread=192218 -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Bill de hOra Sent: Monday, January 22, 2007 10:20 AM To: Rest List Subject: [rest-discuss] using multiple urls for formats I'd like to publish some content in different formats. I think I'd like to use URLs as follows: http://example.com/docs/doc <http://example.com/docs/doc> for "the resource"; and atom, csv, rdf, zip: http://example.com/docs/doc/atom <http://example.com/docs/doc/atom> http://example.com/docs/doc/csv <http://example.com/docs/doc/csv> http://example.com/docs/doc/rdf <http://example.com/docs/doc/rdf> appended for "formats". Probably I'll use rel tags/URI templates and a howto page to document that you can get at representations in multiple forms. I know there's conneg at the HTTP level, but it seems to have miserably, utterly, and completely failed* on the web. Thoughts? cheers Bill * http://www.xml.com/pub/a/2004/07/21/dive.html <http://www.xml.com/pub/a/2004/07/21/dive.html>
Quoting Bill de hOra <bill@...>: > I'd like to publish some content in different formats. I think I'd like > to use URLs as follows: > > http://example.com/docs/doc > > for "the resource"; and atom, csv, rdf, zip: > > http://example.com/docs/doc/atom > http://example.com/docs/doc/csv > http://example.com/docs/doc/rdf > > appended for "formats". Probably I'll use rel tags/URI templates and a > howto page to document that you can get at representations in multiple > forms. > > I know there's conneg at the HTTP level, but it seems to have miserably, > utterly, and completely failed* on the web. > > Thoughts? A lot of the time when people publish human and machine-readable formats, conneg doesn't seem appropriate, because the formats are really not substitutable for each other. Eg, imagine the surprise that someone gets when they try to run XSLT over an XML representation but get a zip file because their XSLT processor isn't setting the Accept headers in exactly the way that the server expects. I think that using multiple URLs is a good thing in that sort of scenario, but a weakness is that they become more difficult to manage. Agents and languages like RDF can't easily tell whether two formats refer to the same concept. A compromise is to additionally provide a URL for the concept, eg: "http://example.com/docs/doc", that 303-redirects to an appropriate representation, which also has the advantage of being compatible with the httpRange-14 resolution. [1] Using the Link header [2] to point from the information resource (the file), to the non-information resource (the concept), seems like a good idea, but I'm not sure whether any of the predefined rel values are appropriate. How about Link: <http://example.com/docs/doc>; rev=alternate (notice the rev instead of rel) [1] http://lists.w3.org/Archives/Public/www-tag/2005Jun/0039.html [2] http://www.apps.ietf.org/rfc/rfc2068.html#sec-19.6.2.4 -- Dave
Bill de hOra wrote:
> I'd like to publish some content in different formats. I think I'd like
> to use URLs as follows:
>
> http://example.com/docs/doc
>
> for "the resource"; and atom, csv, rdf, zip:
>
> http://example.com/docs/doc/atom
> http://example.com/docs/doc/csv
> http://example.com/docs/doc/rdf
>
> appended for "formats". Probably I'll use rel tags/URI templates and a
> howto page to document that you can get at representations in multiple
> forms.
Why not use the idiomatic dot instead of slash here, Bill? e.g.,
http://example.com/docs/doc.atom
http://example.com/docs/doc.csv
http://example.com/docs/doc.rdf
This has a couple of practical benefits:
* `curl -O <URL>` (or Right Clicky -> Save As) does the Right
Thing.
* When you need to cache this stuff to disk, you can just use
PATH_INFO directly. If you use the slash notation, your
"docs/doc" representation will collide with your "docs/doc"
directory.
Am I missing something?
--
Ryan Tomayko
http://tomayko.com/
At Mon, 22 Jan 2007 15:20:14 +0000, Bill de hOra <bill@...> wrote: > > > I'd like to publish some content in different formats. I think I'd like > to use URLs as follows: > > http://example.com/docs/doc > > for "the resource"; and atom, csv, rdf, zip: > > http://example.com/docs/doc/atom > http://example.com/docs/doc/csv > http://example.com/docs/doc/rdf > > appended for "formats". Probably I'll use rel tags/URI templates and a > howto page to document that you can get at representations in multiple > forms. […] Hi Bill, Is there something wrong with http://example.com/docs/doc as the primary URL and: http://example.com/docs/doc.atom http://example.com/docs/doc.csv http://example.com/docs/doc.rdf as the ones which ignore the Accept header, which would probably make more sense to most users? Also, I thought I’d just read a recommendation on how to do this, but I can’t recall where. I’m trying to find it. IIRC, it suggests pretty much using the URLs as above with multiple formats served on the primary URL based on the Accept header. It might have also involved 302s. best, Erik Hetzner
> >Also, I thought I’d just read a recommendation on how to do this, but >I can’t recall where. I’m trying to find it. IIRC, it suggests pretty >using the URLs as above with multiple formats served on the primary >URL based on the Accept header. It might have also involved 302s. > You may be thinking of this: http://www.w3.org/2001/tag/doc/alternatives-discovery.html -Eric
> * `curl -O <URL>` (or Right Clicky -> Save As) does the Right > Thing. Sold. cheers Bill
Erik Hetzner wrote: > Hi Bill, > > Is there something wrong with > > http://example.com/docs/doc > > as the primary URL and: > > http://example.com/docs/doc.atom > http://example.com/docs/doc.csv > http://example.com/docs/doc.rdf > > as the ones which ignore the Accept header, which would probably make > more sense to most users? Nope (and this is for Ryan too). I just happened to use a '/' instead of '.'. For no good reason, I blame Plone, which has been today's web system of choice. > Also, I thought I’d just read a recommendation on how to do this, but > I can’t recall where. I’m trying to find it. IIRC, it suggests pretty > using the URLs as above with multiple formats served on the primary > URL based on the Accept header. It might have also involved 302s. I guess the issue here comes down to URI proliferation for a single resource. Mike Schinkel was asking about this recently; I think this is a case where a) multiple URIs one resource makes some kind of sense, b) it's becoming an idiomatic design pattern for web apps anyway. There is the accept/content-* machinery but no-one (statistically speaking) uses that stuff to obtain "the pdf" or whatever. cheers Bill
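The dot-extension scheme the thread settles on can be sketched as a simple dispatch step; the media-type table and the function shape are illustrative, not taken from any particular framework:

```python
# Map an extension-style URL path to the representation to serve,
# as in the doc.atom / doc.csv scheme discussed above.
MEDIA_TYPES = {
    "atom": "application/atom+xml",
    "csv": "text/csv",
    "rdf": "application/rdf+xml",
}

def dispatch(path):
    """Return (resource path, media type). A known extension pins the
    format and ignores the Accept header; the bare URL falls back to conneg."""
    base, dot, ext = path.rpartition(".")
    if dot and ext in MEDIA_TYPES:
        return base, MEDIA_TYPES[ext]
    return path, None

dispatch("/docs/doc.csv")  # ('/docs/doc', 'text/csv')
dispatch("/docs/doc")      # ('/docs/doc', None) -> negotiate on Accept
```

This keeps one canonical resource URL while giving each format a bookmarkable, `curl -O`-friendly address.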
Hi, I encountered a use case, which I want to get your opinion on. Say that in my application I have a search form (resource that accepts search queries). Now, when someone clicks one of the results, I'd like to put into the page 'next' and 'prev' links (similar to what you have in bugzilla for example). Now, one (bad) way of doing this, is keeping a state in the server that says the user is doing a search, and render the page with the links according to that state. This is bad because the user may get to the page in other ways (bookmarks, search something else and go back etc.), which will mean the 'next' and 'prev' links may be corrupted. Another way is to make the links to the search results contain the fact that they originated from a search, with all search query data (maybe encoded in some way, e.g., serialized). This means two things: (a) the url to the resource contains data outside of the scope of the resource, (b) the returned representation contains information that is not part of the resource. Is it OK to say that the search result contains urls which are not the resources themselves, but "meta resources" that wrap the real resource and add information?: resource is http://example.org/resources/23 search is http://example.org/searches/43243otuou # '43243otuou' is an encoding of the search parameters (instead of query string) search result is: http://example.org/searches/43243otuou/1 # '1' says this is the first result the latter returns something like: <next-result url="http://example.org/searches/43243otuou/2"/> <resource> # resource representation </resource> also, when returning HTML, 'prev' and 'next' are embedded in the resource representation (somewhere between giving its name and other information). is it OK? Thanks, Ittay -- =================================== Ittay Dror, Chief architect, R&D, Qlusters Inc. ittayd@... +972-3-6081994 Fax: +972-3-6081841 www.openqrm.org - Data Center Provisioning
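Ittay's '43243otuou'-style encoding could be a reversible, URL-safe serialization of the query, so result pages can carry 'next'/'prev' links without any server-side session state. The JSON-plus-base64 choice below is an assumption for illustration:

```python
import base64
import json

def encode_search(params):
    """Serialize search parameters into a URL-safe token."""
    raw = json.dumps(params, sort_keys=True).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_search(token):
    """Recover the search parameters from the token in the URL."""
    padded = token + "=" * (-len(token) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = encode_search({"q": "widget", "status": "open"})
# search URL:    http://example.org/searches/<token>
# first result:  http://example.org/searches/<token>/1
```

Because the token is self-describing, any bookmark or back-button navigation reconstructs the same search, which avoids the corrupted-links problem he describes.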
On 1/22/07, Nikunj Mehta <nrmehtais@...> wrote:
>
> I was going through some mail archives and read that [1] PUT does not beget an ETag in the response. I quote from this reference below:
>
>
> > One problem is that the behavior of returning ETag in response to a PUT request isn't specified by HTTP
> >
>
> However a reading of 10.2.2 indicates otherwise. Since the reference I am quoting is rather recent, I am wondering what is missing? I would imagine that the ETag would be used with PUT in order to provide a desired level of concurrency control.
>
> I understand this is not an RFC 2616 discussion group, but this question is aimed at understanding how response headers are restricted by the operations requested of a resource.
I know it is rather chatty, but this is the way that I interpret the
HTTP spec to obtain an ETag:
-->
PUT /foo
{entity}
<--
201 Created
Date: Mon, 22 Jan 2007 22:26:08 GMT
-->
HEAD /foo
If-Unmodified-Since: Mon, 22 Jan 2007 22:26:08 GMT
<--
200 OK
ETag: "abc123"
Hope that helps, Alan Dean
On 1/22/07, Nikunj Mehta <nrmehtais@...> wrote: > I am trying to understand how strong entity tags are being generated for use with HTTP. Can anyone point to suitable resources or suggest options? See my delicious links: http://del.icio.us/alan.dean/etag I use a hash of the entity for my etags. Alan Dean
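Alan's hash-of-the-entity approach might look like the sketch below; the hash algorithm and truncation length are illustrative choices, not details he specifies:

```python
import hashlib

def make_etag(entity):
    """Strong ETag derived from a hash of the entity body: the same bytes
    always produce the same tag, and any change to the bytes changes it."""
    return '"%s"' % hashlib.sha256(entity).hexdigest()[:32]

etag = make_etag(b"<foo>bar</foo>")
# Served as, e.g.:  200 OK / ETag: <etag>
```

A content-derived tag like this is convenient because the server can recompute it statelessly on every request, and it pairs naturally with If-Match for the optimistic-locking use discussed earlier in the thread.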
--- In rest-discuss@yahoogroups.com, "Alan Dean" <alan.dean@...> wrote:
>
> On 1/22/07, Nikunj Mehta <nrmehtais@...> wrote:
> >
> > I was going through some mail archives and read that [1] PUT does
not beget an ETag in the response. I quote from this reference below:
> >
> >
> > > One problem is that the behavior of returning ETag in response
to a PUT request isn't specified by HTTP
> > >
> I know it is rather chatty, but this is the way that I interpret the
> HTTP spec to obtain an ETag:
>
> -->
> PUT /foo
>
> {entity}
>
> <--
> 201 Created
> Date: Mon, 22 Jan 2007 22:26:08 GMT
>
> -->
> HEAD /foo
> If-Unmodified-Since: Mon, 22 Jan 2007 22:26:08 GMT
>
> <--
> 200 OK
> ETag: "abc123"
This is what I feared. Problem is that I lose track of the ETag at the
end of the first request. The RFC says that a server MAY do so. Do
most servers not return an ETag with the 201 Created response? If I am
writing a new server, would clients ignore the ETag returned for the
PUT response?
On 1/22/07, Ittay Dror <ittayd@...> wrote: > Hi, > > I encountered a use case, which I want to get your opinion on. > > Say that in my application I have a search form (resource that accepts search queries). Now, when someone clicks one of the results, I'd like to put into the page 'next' and 'prev' links (similar to what you have in bugzilla for example). > > Now, one (bad) way of doing this, is keeping a state in the server that says the user is doing a search, and render the page with the links according to that state. This is bad because the user my get to the page in other ways (bookmarks, search something else and go back etc.), which will mean the 'back' and 'prev' links may be corrupted. > > Another way is to make the links to the search results contain the fact that they originated from a search, with all search query data (maybe encoded in some way, e.g., serialized). > > This means two things: (a) the url to the resource contains data outside of the scope of the resource, (b) the returned representation contains information that is not part of the resource. > > Is it OK to say that the search result contains urls which are not the resources themselves, but "meta resources" that wrap the real resource and add information?: I'm having a hard time seeing why you would not just use either of the following for a simple term search, such as 'rest': http://example.com?q=rest ... google style or http://example.com/search/rest ... delicious style for pagination: http://example.com?q=rest&page=1 or http://example.com/search/rest/page/1 or http://example.com/search/rest?page=1 Alan Dean
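Building those google-style search URLs is mechanical. A small sketch (host and parameter names are just the ones from the examples above, not from any real API):

```python
from urllib.parse import urlencode

def search_url(base, query, page=None):
    # All search context lives in the query string, so every results
    # page is bookmarkable and needs no server-side session state.
    params = {"q": query}
    if page is not None:
        params["page"] = page
    return base + "?" + urlencode(params)
```

For example, search_url("http://example.com", "rest", page=1) yields http://example.com?q=rest&page=1, which carries the full search context Ittay was worried about without any server-side state.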
On 1/22/07, nrmehta <nrmehtais@...> wrote: > This is what I feared. Problem is that I lose track of the ETag at the > end of the first request. The RFC says that a server MAY do so. Do > most servers not return an ETag with the 201 Created response? If I am > writing a new server, would clients ignore the ETag returned for the > PUT response? Yeah, I've been around this particular house too ;-) As it happens, I write in .net and the runtime sinks ETag headers if they are set because a 201 response is not cacheable. As to the clients, I haven't the foggiest - browsers don't support PUT and pretty much all the other clients are hand-cranked / proprietary. I'm guessing that any MS software will ignore it on a PUT (just a guess from the server-side .net behaviour). Maybe someone who works in Java / Perl / PHP can advise what the default behaviour is in those runtimes. I have to admit it seems odd to send an ETag along with a response marked no-cache, but it is permitted by the spec if you wish to do so. Alan Dean
Bill de hOra wrote: > > > Erik Hetzner wrote: > > > Also, I thought I’d just read a recommendation on how to do this, but > > I can’t recall where. I’m trying to find it. IIRC, it suggests pretty > > using the URLs as above with multiple formats served on the primary > > URL based on the Accept header. It might have also involved 302s. > > I guess the issue here comes down to URI proliferation for a single > resource. Mike Schinkel was asking about this recently; I think this is > a case where a) multiple URIs one resource makes some kind of sense, b) > it's becoming an idiomatic design pattern for web apps anyway. There is > the accept/content-* machinery but no-one (statistically speaking) uses > that stuff to obtain "the pdf" or whatever. "print this page" falls under the idiom as well. Any usability people on the list? cheers Bill
At Mon, 22 Jan 2007 20:59:56 +0000, Bill de hOra wrote: > I guess the issue here comes down to URI proliferation for a single > resource. Mike Schinkel was asking about this recently; I think this is > a case where a) multiple URIs one resource makes some kind of sense, b) > it's becoming an idiomatic design pattern for web apps anyway. There is > the accept/content-* machinery but no-one (statistically speaking) uses > that stuff to obtain "the pdf" or whatever. I like content negotiation, in theory & for applications, but if I know that <http://example.org/doc> has a pdf representation & I can’t get it in my browser by appending ‘.pdf’ I’m going to be frustrated. And URI proliferation doesn’t seem to be that bad. From <http://www.w3.org/2001/tag/doc/alternatives-discovery.html#id2262384> > Principal Conclusions: > > * URIs are cheap. Create them as needed, publish them to the Web, > and ensure that they are appropriately linked in to the rest of the > Web. Thus, each representation of interest should get it’s own URI > (become a specific resource) and there should be one additional URI > representing the generic resource. best, Erik Hetzner
On Jan 22, 2007, at 2:59 PM, Alan Dean wrote: > As it happens, I write in .net and the runtime sinks ETag headers if > they are set because a 201 response is not cacheable. That's dumb. Which version of the runtime? ... > I have to admit it seems odd to send an ETag along with a response > marked no-cache, but it is permitted by the spec if you wish to do so. ETag is not an entity-header. ETag provides information about the resource, not the current message, so it is orthogonal to cache control. ....Roy
Hi Roy, On Jan 22, 2007, at 3:59 PM, Roy T. Fielding wrote: > On Jan 22, 2007, at 2:59 PM, Alan Dean wrote: >> I have to admit it seems odd to send an ETag along with a response >> marked no-cache, but it is permitted by the spec if you wish to do >> so. > > ETag is not an entity-header. ETag provides information about the > resource, not the current message, so it is orthogonal to cache > control. > Hmm. How else might an ETag be used other than determining if a cached entity is still current? Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
On Jan 22, 2007, at 4:11 PM, Bill Venners wrote: > Hi Roy, > > On Jan 22, 2007, at 3:59 PM, Roy T. Fielding wrote: > >> On Jan 22, 2007, at 2:59 PM, Alan Dean wrote: >>> I have to admit it seems odd to send an ETag along with a response >>> marked no-cache, but it is permitted by the spec if you wish to >>> do so. >> >> ETag is not an entity-header. ETag provides information about the >> resource, not the current message, so it is orthogonal to cache >> control. >> > Hmm. How else might an ETag be used other than determining if a > cached entity is still current? The ETag *header field* is a response header. The entity-tag value contained within that header field may be used in *later* requests for cache-related conditionals, like If-None-Match and If-Match. The entity tag is actually metadata about the internal resource mapping on the server -- it does not necessarily reflect anything about the content (other than the premise that the value must change whenever the content is changed). That is why it doesn't matter which status code is used or what content is supplied in the response message, since the Etag is referring to the mapping operation rather than the content of *this* message. Hence, the ETag provided in a 201 response must be the entity-tag of the mapping result after completing the resource-changing operation. It is a tricky thing to understand, and the spec does a poor job of explaining it. ....Roy
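Roy's point - that the entity tag describes the URI-to-entity mapping after the change, not the message carrying the status code - can be made concrete with a toy origin server (entirely my sketch; the post specifies no implementation, and the dict-backed store is an assumption):

```python
import hashlib

class ToyOrigin:
    """An in-memory origin server whose entity tag is metadata about
    the current URI -> entity mapping."""

    def __init__(self):
        self.entities = {}

    def etag(self, uri):
        # Recomputed from the mapping, so it changes exactly when
        # the mapped entity changes.
        return '"%s"' % hashlib.sha1(self.entities[uri]).hexdigest()

    def put(self, uri, entity):
        created = uri not in self.entities
        self.entities[uri] = entity
        # The ETag on the 201/200 reflects the mapping *after* the
        # change -- exactly what a later If-Match will be compared to.
        return (201 if created else 200), {"ETag": self.etag(uri)}
```

A client that keeps the ETag from the 201 can then send If-Match on its next PUT without the extra HEAD round-trip described earlier in the thread.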
On Mon, 22 Jan 2007, Bill de hOra wrote:
> I'd like to publish some content in different formats. I think I'd like
> to use URLs as follows:
>
> http://example.com/docs/doc
>
> for "the resource"; and atom, csv, rdf, zip:
>
> http://example.com/docs/doc/atom
> http://example.com/docs/doc/csv
> http://example.com/docs/doc/rdf
I agree with most other people: If you're not going to use content
negotiation and you expect people not computers to be using these URLs,
use doc.atom, doc.csv, etc.
> appended for "formats". Probably I'll use rel tags/URI templates and a
> howto page to document that you can get at representations in multiple
> forms.
Can you elaborate on this a bit or point me to a resource that does? I
assume you mean that the HTML representation will include <link>s to
alternate stuff? I wish browsers exposed that stuff a bit more.
> I know there's conneg at the HTTP level, but it seems to have miserably,
> utterly, and completely failed* on the web.
It's failed on the people web, but seems to work pretty well for little
client to little (or big) servers. AJAX requests for JSON
representations and the like, command line tools that pull editable
representations to vim, whatever.
When we built the Socialtext REST (sic?) API [1] we debated for quite a
while about content negotiation. Initially I wanted the "user's web front
end" to use the same URLs as "api's front end" and figured something
like what you describe would be useful. Compromise eventually settled us
on an api front end that gets used by the interface and we declared
(very simple) content negotiation the way to go, as code was always
going to be there to set the Accept header.
This of course turned out to be only sort of true. Code is written by
people who like to go exploring with their browsers. Library code is
sometimes written by people who don't know about the Accept header.
So now there is an ?accept=<type> parameter accepted by most
representations. I wonder if it would also be useful to implement
.<filetype extension>?
[1]
https://www.socialtext.net/st-rest-docs/index.cgi?socialtext_rest_documentation
--
Chris Dent http://burningchrome.com/~cdent/mt
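Chris's ?accept= escape hatch can be sketched as a tiny resolver in which the query parameter wins over the Accept header (my sketch; only the parameter name comes from his post, and the q-value-free Accept parsing is a deliberate simplification):

```python
def choose_media_type(accept_header, query_accept=None,
                      supported=("text/html", "application/json")):
    # ?accept=<type> overrides the Accept header, so people exploring
    # with a browser (or a library that can't set headers) get a say.
    if query_accept in supported:
        return query_accept
    # Crude conneg: the first supported type listed wins; real Accept
    # handling would honour q-values and wildcards.
    for part in (accept_header or "").split(","):
        media = part.split(";")[0].strip()
        if media in supported:
            return media
    return supported[0]  # default representation
```

This keeps the canonical URI stable while still letting hand-cranked clients pick a representation.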
[...]
On Jan 22, 2007, at 4:34 PM, Roy T. Fielding wrote: > On Jan 22, 2007, at 4:11 PM, Bill Venners wrote: > >> Hi Roy, >> >> On Jan 22, 2007, at 3:59 PM, Roy T. Fielding wrote: >> >>> On Jan 22, 2007, at 2:59 PM, Alan Dean wrote: >>>> I have to admit it seems odd to send an ETag along with a response >>>> marked no-cache, but it is permitted by the spec if you wish to >>>> do so. >>> >>> ETag is not an entity-header. ETag provides information about the >>> resource, not the current message, so it is orthogonal to cache >>> control. >>> >> Hmm. How else might an ETag be used other than determining if a >> cached entity is still current? > > The ETag *header field* is a response header. The entity-tag value > contained within that header field may be used in *later* requests > for cache-related conditionals, like If-None-Match and If-Match. > > The entity tag is actually metadata about the internal resource > mapping on the server -- it does not necessarily reflect anything > about the content (other than the premise that the value must > change whenever the content is changed). That is why it doesn't > matter which status code is used or what content is supplied in > the response message, since the Etag is referring to the mapping > operation rather than the content of *this* message. Hence, the > ETag provided in a 201 response must be the entity-tag of the > mapping result after completing the resource-changing operation. > > It is a tricky thing to understand, and the spec does a poor job > of explaining it. > I think you clarified it. I believe I understand your point about 201 after PUT, but could you clarify what you mean by "internal resource mapping?" Do you mean the way a server will map a URI to a representation? But if content negotiation is taking place, the server would need the request headers too to determine a representation, not just a URI. What does "internal resource mapping" mean? What's being mapped to what? Thanks. 
Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Ittay Dror wrote:
> Say that in my application I have a search form (resource that accepts search queries). Now, when someone clicks one of the results, I'd like to put into the page 'next' and 'prev' links (similar to what you have in bugzilla for example).
>
> Now, one (bad) way of doing this, is keeping a state in the server that says the user is doing a search, and render the page with the links according to that state. This is bad because the user may get to the page in other ways (bookmarks, search something else and go back etc.), which will mean the 'next' and 'prev' links may be corrupted.
>
> Another way is to make the links to the search results contain the fact that they originated from a search, with all search query data (maybe encoded in some way, e.g., serialized).

Yes. The latter is the correct way, at least if you want this system to scale. There's no need to encode the query data (apart from URL encoding), just specify it as query parameters.

> This means two things: (a) the url to the resource contains data outside of the scope of the resource, (b) the returned representation contains information that is not part of the resource.

Not really, no. When you say "resource", you seem to be implying that there is a one-to-one mapping between resources and results. However, within this system you're designing a resource also contains information about the context it was located in, so it's not the same as a result. The resource is simply the thing that is pointed to by the URL, including all this query context data.

> Is it OK to say that the search result contains urls which are not the resources themselves, but "meta resources" that wrap the real resource and add information?:

Not quite, see above. The search result will return both context data and results. These can be combined to form URLs that reference resources that combine both the context data and the result.
> also, when returning HTML, 'prev' and 'next' are embedded in the resource representation (somewhere between giving its name and other information). is it OK? Yes. -- Chris Burdess
On 1/22/07, Roy T. Fielding <fielding@...> wrote: > On Jan 22, 2007, at 2:59 PM, Alan Dean wrote: > > As it happens, I write in .net and the runtime sinks ETag headers if > > they are set because a 201 response is not cacheable. > > That's dumb. Which version of the runtime? [snip] > ETag is not an entity-header. ETag provides information about the > resource, not the current message, so it is orthogonal to cache > control. > > ....Roy I just did a double-check to make sure that I'm not talking out of my hat, and yes - the ETag is sunk if Cache-Control is set to no-cache. I'm using framework v2.0 - don't know what the behaviour of previous versions is. Alan Dean
On 1/23/07, Roy T. Fielding <fielding@...> wrote: > The ETag *header field* is a response header. The entity-tag value > contained within that header field may be used in *later* requests > for cache-related conditionals, like If-None-Match and If-Match. > > The entity tag is actually metadata about the internal resource > mapping on the server -- it does not necessarily reflect anything > about the content (other than the premise that the value must > change whenever the content is changed). That is why it doesn't > matter which status code is used or what content is supplied in > the response message, since the Etag is referring to the mapping > operation rather than the content of *this* message. Hence, the > ETag provided in a 201 response must be the entity-tag of the > mapping result after completing the resource-changing operation. > > It is a tricky thing to understand, and the spec does a poor job > of explaining it. > > ....Roy Following the logic that ETag should be provided with a 201 Created, should one be provided with a 304 Not Modified too? What about empty ETags? 204 No Content ETag: "" or: 404 Not Found ETag: "" Alan Dean
Alan Dean schrieb:
>
>
> On 1/23/07, Roy T. Fielding <fielding@gbiv. com
> <mailto:fielding%40gbiv.com>> wrote:
> > The ETag *header field* is a response header. The entity-tag value
> > contained within that header field may be used in *later* requests
> > for cache-related conditionals, like If-None-Match and If-Match.
> >
> > The entity tag is actually metadata about the internal resource
> > mapping on the server -- it does not necessarily reflect anything
> > about the content (other than the premise that the value must
> > change whenever the content is changed). That is why it doesn't
> > matter which status code is used or what content is supplied in
> > the response message, since the Etag is referring to the mapping
> > operation rather than the content of *this* message. Hence, the
> > ETag provided in a 201 response must be the entity-tag of the
> > mapping result after completing the resource-changing operation.
> >
> > It is a tricky thing to understand, and the spec does a poor job
> > of explaining it.
> >
> > ....Roy
>
> Following the logic that ETag should be provided with a 201 Created,
> should one be provided with a 304 Not Modified too?
Yes. <http://greenbytes.de/tech/webdav/rfc2616.html#status.304>:
"The response MUST include the following header fields:
* Date, unless its omission is required by Section 14.18.1
If a clockless origin server obeys these rules, and proxies and clients
add their own Date to any response received without one (as already
specified by [RFC 2068], section 14.19), caches will operate correctly.
* ETag and/or Content-Location, if the header would have been sent
in a 200 response to the same request
* Expires, Cache-Control, and/or Vary, if the field-value might
differ from that sent in any previous response for the same variant"
> What about empty ETags?
>
> 204 No Content
> ETag: ""
>
> or:
>
> 404 Not Found
> ETag: ""
In this case it's not empty, but consists of two characters. That being
said, I think it's legal but I really wouldn't be surprised if some
recipients have trouble handling it.
Anyway; it seems to me that a better place for this discussion would be
the HTTP working group's mailing list. In particular, because the topic
of ETags upon PUT has been discussed over there as well.
Best regards, Julian
Hi, Something I've been pondering the last few days is basically if there is a preference to POST new items as ; GET http://example.com/items/1234 , returns 404 POST http://example.com/items/1234 (there is no list concept; only resources) or GET http://example.com/items/1234 , returns 404 POST http://example.com/items with 1234 as parameter value , returns above URL (/items is the concept of a list which you post new items to, or work with a list) I can see the first option being nice in terms of PUT and DELETE as well, but some times you are in situations where you don't know the /1234 identifier upfront (for session hashes, for example) Any thoughts on the list? Is the mix of the two perhaps a better way, as in ; POST / GET http://example.com/item/1234 , returns 404 (note no plural) POST / GET http://example.com/items (plural for 'list' operations) Regards, Alexander -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
Hi Alexander, my take on this: if you want to tell your clients what resource is a collection they can POST to, use Atom Publishing Protocol <service> documents[1]. HTH, Jan [1] http://www.ietf.org/internet-drafts/draft-ietf-atompub-protocol-12.txt On Tuesday, January 23, 2007, at 12:21PM, "Alexander Johannesen" <alexander.johannesen@...> wrote: >Hi, > >Something I've been pondering the last few days is basically if there >is a preference to POST new items as ; > > GET http://example.com/items/1234 , returns 404 > POST http://example.com/items/1234 > (there is no list concept; only resources) > >or > > GET http://example.com/items/1234 , returns 404 > POST http://example.com/items with 1234 as parameter value , >returns above URL > (/items is the concept of a list which you post new items to, or >work with a list) > >I can see the first option being nice in terms of PUT and DELETE as >well, but some times you are in situations where you don't know the >/1234 identifier upfront (for session hashes, for example) > >Any thoughts on the list? Is the mix of the two perhaps a better way, as in ; > > POST / GET http://example.com/item/1234 , returns 404 (note no plural) > POST / GET http://example.com/items (plural for 'list' operations) > > >Regards, > >Alexander >-- > --------------------------------------------------------------------------- > Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps >------------------------------------------ http://shelter.nu/blog/ -------- > > > >Yahoo! Groups Links > > > > >
Alexander Johannesen wrote:
> GET http://example.com/items/1234 , returns 404
> POST http://example.com/items/1234
> (there is no list concept; only resources)

That makes little sense: if http://example.com/items/1234 doesn't identify a resource (hence 404), then only PUT makes sense (or DELETE, to maintain its idempotency - two DELETEs should work). If http://example.com/items/1234 does identify a resource, but you can't GET it, then it should return 405.

> GET http://example.com/items/1234 , returns 404
> POST http://example.com/items with 1234 as parameter value ,
> returns above URL
> (/items is the concept of a list which you post new items to, or
> work with a list)
>
> I can see the first option being nice in terms of PUT and DELETE as
> well, but some times you are in situations where you don't know the
> /1234 identifier upfront (for session hashes, for example)

Agreed. Not having the parameter value, but having the 1234 determined by the server and then the client informed by the 303 response works well.

> Any thoughts on the list? Is the mix of the two perhaps a better way, as in ;
>
> POST / GET http://example.com/item/1234 , returns 404 (note no plural)
> POST / GET http://example.com/items (plural for 'list' operations)

Either is RESTful, since no particular relationship between http://example.com/item/1234 and http://example.com/item is entailed. However it can certainly be guessed at, so it seems better URI design to have one "up the path" from the other, though using the plural in both cases may make more sense: http://example.com/items identifies the items, and http://example.com/items/1234 identifies that one of the items numbered 1234.
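Jon's rules reduce to a small status-code table. A sketch of my reading of them (the 204 for a repeated DELETE follows his idempotency aside; nothing here is from an actual server):

```python
def status_for(method, uri, store, allowed=("GET", "PUT", "DELETE")):
    # Unmapped URI: PUT may create it, DELETE is idempotently a
    # success, anything else is 404. Mapped URI: an unsupported
    # method gets 405 Method Not Allowed, not 404.
    if uri not in store:
        if method == "PUT":
            return 201
        if method == "DELETE":
            return 204  # deleting what is already gone still "works"
        return 404
    if method not in allowed:
        return 405
    return 200
```

The key asymmetry is the last branch: existence of the resource decides between 404 and 405, not whether the handler happens to be implemented.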
On 1/23/07, Julian Reschke <julian.reschke@...> wrote: > In this case it's not empty, but consists of two characters. That being > said, I think it's legal but I really wouldn't be surprised if some > recipients have trouble handling it. I was under the impression that all ETag values were quoted, and so an empty ETag in the form "" was the correct ETag for a missing resource. Can't recall where I picked this idea up from, but there you are. This interpretation allows you to specify that an action should only be carried out if the resource is missing, e.g. --> PUT /foo If-Match: "" Regards, Alan Dean
On 1/23/07, Jan Algermissen <algermissen1971@...> wrote: > my take on this: if you want to tell your clients what resource is a collection > they can POST to, use Atom Publishing Protocol <service> documents[1]. I've tried to wrap my head around APP, but can't seem to do it, possibly from the lack of examples. The standard itself uses a lot of abstract speak without actual examples of normal transactions using APP. Unless you've got some good pointers to examples I'm not sure I can rely on it just quite yet, it's a bit fresh and unproven in my eyes although I'm certainly interested and hear good things about it. Regards, Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
On 1/23/07, Jon Hanna <jon@...> wrote:
> Agreed. Not having the parameter value, but having the 1234 determined
> by the server and then the client informed by the 303 response works well.
I was entailing a POST to /items would create a 201 ('created') with
an ETag of either the redirect URL or the resource identifier
(although none feels "best" to me). If I do a 303 on the /items URL
then that's saying the /items resource has moved permanently
elsewhere?
> Either is RESTful, since no particular relationship between
> http://example.com/item/1234 and http://example.com/item is entailed.
> However it can certainly be guessed at, so it seems better URI design to
> have one "up the path" from the other, though using the plural in both
> cases may make more sense http://example.com/items identifies the items
> and http://example.com/items/1234 identifies that one of the items
> numbered 1234.
Agreed, and I'm probably heading that way. The reason this question
came up was that applications might be trying to GET a /items/1234 to
check if a session resource is still available with 404 for not
created, 410 for lost/timed-out, or 200 if it's still there. If a 410
is given, they could POST to the same URL again to put it back (200).
I guess a POST is ok for 410 but not for 404 in this case. Hmm, maybe
I should try to avoid that, and just allow another POST to the /items
URL and use the session hash (1234) as a parameter to override the
creation of a new one and instead use the one I know about. Thoughts?
Alex
--
---------------------------------------------------------------------------
Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps
------------------------------------------ http://shelter.nu/blog/ --------
Alan Dean schrieb: > > > On 1/23/07, Julian Reschke <julian.reschke@ gmx.de > <mailto:julian.reschke%40gmx.de>> wrote: > > In this case it's not empty, but consists of two characters. That being > > said, I think it's legal but I really wouldn't be surprised if some > > recipients have trouble handling it. > > I was under the impression that all ETag values were quoted, and so an > empty ETag in the form "" was the correct ETag for a missing resource. > Can't recall were I picked this idea up from, but there you are. This > interpretation allows you to specify that an action should only be > carried out if the resource is missing, e.g. > > --> > PUT /foo > If-Match: "" Nope. But you could use If-None-Match: "*" (see <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.14.26.p.2>) Best regards, Julian
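The server side of Julian's suggestion - If-None-Match: "*" for a create-only PUT, alongside the If-Match lost-update guard from earlier in the thread - might look like this (my sketch; the dict-backed store and SHA-1 tags are assumptions):

```python
import hashlib

def etag_of(store, uri):
    # Strong entity tag for the currently mapped entity.
    return '"%s"' % hashlib.sha1(store[uri]).hexdigest()

def conditional_put(store, uri, entity, if_match=None, if_none_match=None):
    # If-None-Match: "*" -> only create, never overwrite (412 if mapped)
    # If-Match: <etag>    -> only overwrite the version the client saw
    exists = uri in store
    if if_none_match == "*" and exists:
        return 412  # Precondition Failed: something is already there
    if if_match is not None:
        if not exists:
            return 412
        if if_match != "*" and if_match != etag_of(store, uri):
            return 412  # stale tag: someone else changed it meanwhile
    store[uri] = entity
    return 200 if exists else 201
```

On a 412 the client re-GETs, picks up the fresh ETag, and retries - the optimistic-locking loop the thread has been circling around.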
Alexander Johannesen wrote:
> On 1/23/07, Jon Hanna <jon@...> wrote:
>
>> Agreed. Not having the parameter value, but having the 1234 determined
>> by the server and then the client informed by the 303 response works well.
>
> I was entailing a POST to /items would create a 201 ('created') with
> an ETag of either the redirect URL or the resource identifier
> (although none feels "best" to me). If I do a 303 on the /items URL
> then that's saying the /items resource has moved permanently
> elsewhere?
201 is grand but I'm unsure as to whether existing UAs then move to that
resource or not. 303 doesn't imply /items has moved, 303 is "See Other"
and means that the POST to /items has done something and the URI in the
header shows where something relating to what was done is GETtable.
> I guess a POST is ok for 410 but not for 404 in this case. Hmm, maybe
> I should try to avoid that, and just allow another POST to the /items
> URL and use the session hash (1234) as a parameter to override the
> creation of a new and instead use the one I know about. Thoughts?
POST acts on a resource. 410 indicates the resource is gone. How can you
act on something that has gone?
It'll probably work, but it smells bad to me.
Hi Taylor, On Jan 22, 2007, at 6:41 PM, Taylor Parsons wrote: > REST url format for returning serialized data in formats other than > default. > > For example does anyone feel strongly about either of these two formats? David Hansson certainly does. :-) The former is now the default in Rails 1.2. http://weblog.rubyonrails.org/2007/1/19/rails-1-2-rest-admiration-http-lovefest-and-utf-8-celebrations That may not mean he's "right", but I tend to trust his instincts. Plus, Rails is (I think) the first major web app framework to wholeheartedly switch to a RESTful approach, so I think it provides a useful precedent. That said, I can imagine cases where the latter would feel more natural or easy to code; but if you're unsure, I'd go with the former (and Rails). -enp > > First approach parses the file format as the switch for deciding > how the caller wants the data formatted. > /baseURL/method/doc.rss > /baseURL/method/doc.json > /baseURL/method/doc.xml > /baseURL/method/doc.html > > Second approach: the default return format would be html, and we add > in a feeds sub-dir and a file format to use as our switch. > /baseURL/feeds/rss/method/ > /baseURL/feeds/json/method/ > /baseURL/feeds/xml/method/
A great piece of writing on the topic by Sean McGrath: http://www.itworld.com/Tech/2327/nlsebiz070123/pfindex.html Entertaining too! Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
With all of the discussions about the handling of ETags, If-Match, etc I thought that I would put up the flowchart we are using to define our test cases. See http://www.flickr.com/photos/alan-dean/367132415/ If you notice any errors or misunderstanding of the HTTP spec in the diagram, please let me know :-) Regards, Alan Dean
Bill de hOra wrote: > > > Erik Hetzner wrote: > > > Hi Bill, > > > > Is there something wrong with > > > > http://example.com/docs/doc <http://example.com/docs/doc> > > > > as the primary URL and: > > > > http://example.com/docs/doc.atom <http://example.com/docs/doc.atom> > > http://example.com/docs/doc.csv <http://example.com/docs/doc.csv> > > http://example.com/docs/doc.rdf <http://example.com/docs/doc.rdf> > > > > as the ones which ignore the Accept header, which would probably make > > more sense to most users? > > Nope (and this is for Ryan too). I just happened to use a '/' instead of > '.'. For no good reason, I blame Plone, which has been today's web > system of choice. As an aside; it seems you can't do this (append an extension) in Plone, or more properly Zope2, because of the way object traversal there works with the URI path. You can do this instead: http://example.com/docs/doc/entry.atom oh well. cheers Bill
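Mapping the appended extension back to a media type is the easy half of this idiom. A sketch with Python's mimetypes table (registering .atom, .rdf, and .csv is my addition - .atom and .rdf aren't in the default table):

```python
import mimetypes

# Teach the resolver the types from the examples in this thread.
mimetypes.add_type("application/atom+xml", ".atom")
mimetypes.add_type("application/rdf+xml", ".rdf")
mimetypes.add_type("text/csv", ".csv")

def media_type_for(path):
    # /docs/entry.atom -> application/atom+xml; None means the path
    # has no extension, so fall back to conneg on the primary URI.
    mtype, _ = mimetypes.guess_type(path)
    return mtype
```

The hard half, as Bill notes, is getting your framework's traversal machinery to hand you the extension in the first place.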
David Powell wrote: > A lot of the time when people publish human and machine-readable > formats, conneg doesn't seem appropriate, because the formats are > really not substitutable for each other. Eg, imagine the surprise > that someone gets when they try to run XSLT over an XML representation > but get a zip file because their XSLT processor isn't setting the > Accept headers in exactly the way that the server expects. That's a really serious bug in the XSLT processor then. Any XSLT processor that says it prefers to get zipped files, and then doesn't handle them; well that's almost (but not quite) too dumb to be believed. If such a thing exists, it's only because servers aren't respecting conneg. Fail fast is a feature, not a bug. Working around broken software is an antipattern we have to do away with. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 1/22/07, Roy T. Fielding <fielding@...> wrote: > On Jan 22, 2007, at 2:59 PM, Alan Dean wrote: > > As it happens, I write in .net and the runtime sinks ETag headers if > > they are set because a 201 response is not cacheable. > > That's dumb. Which version of the runtime? > > ... > > I have to admit it seems odd to send an ETag along with a response > > marked no-cache, but it is permitted by the spec if you wish to do so. > > ETag is not an entity-header. ETag provides information about the > resource, not the current message, so it is orthogonal to cache > control. > > ....Roy > For information: I have raised this issue with MS support - this is the feedback: "I received your mail yesterday. I understand you want to tell me that our products should follow RFC document. But unfortunately, in most cases, we take these documents as a reference instead of following it completely. Anyway, I agree with you that Cache-Control tag is not directly related to ETag. I've discuss the issue with a senior engineer in our team. According to the discussion, we decide to feedback your issue to our product team. However, the request may takes a long time for the product team to handle as they have to decide if there is any side effect for this modification. Furthermore, I can't determine whether they will fix it or not. In any stage of the process of the fix request, the product team may have authority to reject this request. This is completely out of my control. Sorry for that. I'll keep you update about the latest status of the request. Your patience will be greatly appreciated!" So, at least they acknowledge the issue (although with no promise to fix). Alan Dean
On 1/24/07, Alan Dean <alan.dean@...> wrote: > See http://www.flickr.com/photos/alan-dean/367132415/ Thanks for that; a fantastic resource. I've been using it the last few days to implement a simple REST utility for our internal SOA, and this was just what I needed. Is there a REST FAQ / Wiki somewhere where this chart is mandatory? Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
On 1/25/07, Alexander Johannesen <alexander.johannesen@...> wrote: > On 1/24/07, Alan Dean <alan.dean@...> wrote: > > See http://www.flickr.com/photos/alan-dean/367132415/ > > Thanks for that; a fantastic resource. I've been using it the last few > days to implement a simple REST utility for our internal SOA, and this > was just what I needed. > > Is there a REST FAQ / Wiki somewhere where this chart is mandatory? This was put together by me directly from the HTTP spec, it wasn't taken from an FAQ or Wiki but I am not asserting any restrictions on use so if there is a FAQ or Wiki that wants to republish it - feel free :-) Alan Dean
On 1/25/07, Alan Dean <alan.dean@...> wrote: > ... I am not asserting any restrictions on use so if there is a FAQ or Wiki that wants to republish it - feel free Of course, it would be nice if I was quoted as creator :-)
Alan Dean schrieb: > > > With all of the discussions about the handling of ETags, If-Match, etc > I thought that I would put up the flowchart we are using to define our > test cases. > > See http://www.flickr.com/photos/alan-dean/367132415/ > > If you notice any errors or misunderstanding of the HTTP spec in the > diagram, please let me know :-) > > Regards, > Alan Dean I think one thing that is missing is a more generic handling of method names (you may have that somewhere else in your code, but it's missing from the diagram). For instance, MKCOL to an unmapped URI should either succeed (201) or fail with 405 (method not allowed). Best regards, Julian
On 1/25/07, Julian Reschke <julian.reschke@...> wrote: > > I think one thing that is missing is a more generic handling of method > names (you may have that somewhere else in your code, but it's missing > from the diagram). > > For instance, MKCOL to an unmapped URI should either succeed (201) or > fail with 405 (method not allowed). > > Best regards, Julian > Yes - I was only trying to model the RFC2616 HTTP 1.1 Spec, not WebDAV. http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html Regards, Alan
Alan Dean schrieb: > On 1/25/07, Julian Reschke <julian.reschke@...> wrote: >> >> I think one thing that is missing is a more generic handling of method >> names (you may have that somewhere else in your code, but it's missing >> from the diagram). >> >> For instance, MKCOL to an unmapped URI should either succeed (201) or >> fail with 405 (method not allowed). >> >> Best regards, Julian >> > > Yes - I was only trying to model the RFC2616 HTTP 1.1 Spec, not WebDAV. > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html ...but that still requires returning the right status code when you don't know the method name, right? Best regards, Julian
On 1/25/07, Julian Reschke <julian.reschke@...> wrote: > > ...but that still requires returning the right status code when you > don't know the method name, right? If the file / entity exists and the verb is not in (PUT, POST, DELETE, GET, HEAD) then the flow dictates that a 405 Method Not Allowed is returned. Actually in my code, the method names are whitelisted and the 405 is returned for any method name not on the whitelist (which, in my case, would include MKCOL). To be honest, the flow described by the diagram is focused on the resolution of the various headers, rather than the methods allowed. Maybe there is a potential project for someone interested in WebDAV to do the same kind of thing including other methods - not me though ;-) Regards, Alan
Alan Dean schrieb: > > > On 1/25/07, Julian Reschke <julian.reschke@...> wrote: > > > > ...but that still requires returning the right status code when you > > don't know the method name, right? > > If the file / entity exists and the verb is not in (PUT, POST, DELETE, > GET, HEAD) then the flow dictates that a 405 Method Not Allowed is > returned. > > Actually in my code, the method names are whitelisted and the 405 is > returned for any method name not on the whitelist (which, in my case, > would include MKCOL). > > To be honest the flow described by the diagram is focused on the > resolution of the various headers, rather than the methods allowed. > > Maybe there is a potential project for someone interested in WebDAV to > do the same kind of thing including other methods - not me though ;-) Understood. I just wanted to clarify that the flow graph is not complete with respect to this (as unsupported methods to an unmapped resource would return 404, not 405). Best regards, Julian
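Alan's whitelist dispatch can be sketched roughly as follows; the function and variable names here are invented for illustration, not taken from his actual .NET code:

```python
# Sketch of a method-whitelist dispatch step: any method outside the
# whitelist gets 405 Method Not Allowed, and RFC 2616 14.7 requires a
# 405 response to carry an Allow header listing the valid methods.

ALLOWED_METHODS = {"GET", "HEAD", "PUT", "POST", "DELETE"}

def dispatch(method, resource_exists):
    """Return (status, headers) for the method-handling step only."""
    if method not in ALLOWED_METHODS:
        # e.g. MKCOL: rejected up front, whether or not the URI is mapped
        return 405, {"Allow": ", ".join(sorted(ALLOWED_METHODS))}
    if not resource_exists and method in {"GET", "HEAD", "DELETE"}:
        return 404, {}
    return 200, {}  # placeholder: real handling continues from here
```

As Julian notes, a server that did want to follow RFC 2616 more closely for genuinely unknown methods on unmapped URIs would return 404 rather than 405 in that branch.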
On Sun, 2007-01-21 at 11:57 -0500, Elliotte Harold wrote: > Is it reasonable to use POST for a safe operation that transmits > confidential data in the query string? No, it is not reasonable. You lose your caching potential, which is one thing... but the main practical thing is that the browser will need a button every time it navigates from one page to another. Surely you won't just have one page with the secret credit card as part of the url. There are bound to be dozens of urls that contain that information if only to look through historical records. Using POST with current technology limits your ability to use simple recognisable hyperlinks. On a pure architectural level secrets don't really matter too much. You can do what you want with them. If they are secret they aren't known by many people and are less likely to be frequently fetched than other kinds of data. They don't benefit as much as other resources from caching and other architectural niceties. On the other hand, I hold POST to mean "append the data I provide", which is clearly not what is meant in this case. Using methods inconsistently is a recipe for mismatch and confusion everywhere. So... no. Don't use POST. It will cause you headaches, and is not necessary at all. > So far I think the two or three folks who've actually addressed the > question square on have come down in favor of using POST despite the > safety of the operation. I think you have missed the discussion about safe hashes for secret data. If the data is secret it isn't just shoulder surfing you have to worry about. Using that secret data in an identifier anywhere in your application is a no-no. Even if you trust your employees, you have a duty of care to keep what your customers consider secrets secret as much as possible. You should have a table that maps this secret data to a non-secret hash.
An example: Credit card to non-secret hash table 1234 4567 78 -> 001 4567 1234 90 -> 002 Now instead of visiting <http://example.com/12345678> your user visits <http://example.com/001>, which is the URL you told him to visit. You might have done this by sending him the URL in the mail. You might have done it by asking him to fill in a form with his credit card number in it that returned or redirected to this url. You might have simply hyperlinked to this url from the user's personalised main page. The point is that whenever you put secret data in identifiers you guarantee that everyone in the company will be able to access the information. Don't do that. Use a hash instead. Then you at least have some hope of being able to expose the secrets to as small a subset of your employee community as possible. Naturally, this also solves the shoulder surfing problem... so long as the user doesn't type their credit card number into that form while someone is watching. Benjamin.
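Benjamin's table might be sketched as below. One assumption differs from his wording: a deterministic hash of a card number is brute-forceable given the tiny keyspace of valid card numbers, so this sketch substitutes random opaque tokens and keeps the secret-to-token mapping server-side. All names are illustrative.

```python
# Map secret data (card numbers) to opaque URL tokens. The token has no
# computable relationship to the secret, so it can appear in URLs, logs,
# referrers and browser histories without leaking anything.
import secrets

class TokenTable:
    def __init__(self):
        self._by_secret = {}   # card number -> token
        self._by_token = {}    # token -> card number

    def token_for(self, card_number):
        tok = self._by_secret.get(card_number)
        if tok is None:
            tok = secrets.token_urlsafe(8)   # random, NOT a hash of the secret
            self._by_secret[card_number] = tok
            self._by_token[tok] = card_number
        return tok

    def url_for(self, card_number):
        return "http://example.com/" + self.token_for(card_number)
```

The same card number always maps to the same URL, so the link stays bookmarkable, while access to the reverse mapping can be restricted to a small subset of employees, which is exactly the property Benjamin is after.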
On Tue, 2007-01-23 at 23:16 +1100, Alexander Johannesen wrote: > On 1/23/07, Jon Hanna <jon@...> wrote: > > Either is RESTful, since no particular relationship between > > http://example.com/item/1234 and http://example.com/item is > entailed. > > However it can certainly be guessed at, so it seems better URI > design to > > have one "up the path" from the other, though using the plural in > both > > cases may make more sense http://example.com/items identifies the > items > > and http://example.com/items/1234 identifies that one of the items > > numbered 1234. > Agreed, and I'm probably heading that way. The reason this question > came up was that applications might be trying to GET a /items/1234 to > check if a session resource is still available with 404 for not > created, 410 for lost/timed-out, or 200 if it's still there. If a 410 > is given, they could POST to the same URL again to put it back (200). Use POST for resource creation through http://example.com/items when you expect to return a new url as determined by the server. Use PUT for resource creation to http://example.com/items/1234 where you expect the server to honour the client-provided url and create the resource in-place. See rfc2616. POST to a factory resource such as http://example.com/items is often preferred because it allows the server greater freedom to shuffle you off somewhere else based on its current world view. > I guess a POST is ok for 410 but not for 404 in this case. Hmm, maybe > I should try to avoid that, and just allow another POST to the /items > URL and use the session hash (1234) as a parameter to override the > creation of a new one and instead use the one I know about. Thoughts? I think you are talking about renewing session state... <shrug> I personally would think about doing this as a side-effect of the GET request.
GET is a safe method, meaning the server can't swear in court later that the client actually wanted to renew their session, but the server is free to revoke the session at any time if it wants to anyway. It is just noticing that the session is still in use and making an internal decision to keep it active. On the other hand, you should perhaps be thinking about what this session is for. REST tends to discourage sessions in general. Without looking at your specific use case I wouldn't like to comment too harshly, but a session can often be avoided. Benjamin
On 1/26/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > I think you are talking about renewing session state... > <shrug> I personally would think about doing this as a side-effect of > the GET request. GET is a safe method, meaning the server can't swear in > court later that the client actually wanted to renew their session, but > the server is free to revoke the session at any time if it wants to > anyway. It is just noticing that the session is still in use and making > an internal decision to keep it active. I am dealing with sessions in this case, but I'm making them RESTful by treating them as resources, and that brings up the interesting point of temporary resources. What happens when you GET a temporary resource that's expired, but you can get it back through invocation? A 410 means it's gone, 404 means it's not found, 304 not modified, and so forth; there seems to be nothing that says "yes, it's still there, but you need to ask nicely to get it back". Currently I'm just going to let the sessions exist until asked to be removed, and have the timeout information inside the XML of the GET operation and let applications decide if they're "allowed" to use the info within, as I can't seem to find a RESTful mechanism to use. > On the other hand, you should perhaps be thinking about what this > session is for. REST tends to discourage sessions in general. Without > looking at your specific use case I wouldn't like to comment too > harshly, but a session can often be avoided. These session objects can be created by anyone, from browsers to applications to services, and are a temporary property store with timestamps. They are anonymous, and there is no dependency on application logic, so it's up to the apps to use them properly, for example a browser needs to get a session, use it with cookies, and check the timeout from time to time, or a cron service uses the session for various cleaning jobs, etc. Never mind that they are sessions.
Think of them as temporary resources that after a time become unavailable (but not gone), and you can ask to get them back. Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
Alan: nice work! Is there a higher resolution version of: http://farm1.static.flickr.com/175/367132415_ee9c3b9b40_b_d.jpg available? //Ed
Alexander Johannesen wrote: > These session objects can be created by anyone, from browsers to > applications to services, and are a temporary property store with > timestamps. They are anonymous, and there is no dependency on > application logic, so it's up to the apps to use them properly, for > example a browser needs to get a session, use it with cookies, and > check the timeout from time to time, or a cron service uses the > session for various cleaning jobs, etc. > > Never mind that they are sessions. Think of them as temporary > resources that after a time become unavailable (but not gone), and you > can ask to get them back. In my mind once sessions are finished they're finished for good, not turned off. I think you're dealing with two types of resource - one is a session (transient), another is (maybe) the state of a client. cheers Bill
On Sat, 2007-01-20 at 19:57 +0000, Bill de hOra wrote: > Benjamin Carlyle wrote: > > I have been thinking about the idempotency of POST lately, and the > > exchange with Steve Bjorg has prompted me to write about it. My > current > > direction is to treat a POST of null to a factory resource as > > idempotent. > > [...] > > Thoughts? > Start here: > http://www.mnot.net/drafts/draft-nottingham-http-poe-00.txt > http://www.dehora.net/doc/httplr/draft-httplr-01.html Thanks for the links. Problems with POE: * The specification does not cover how the POE resource is created. Presumably, it is through a POST which could lead to a chicken and egg situation. My proposal to POST null is designed to create the temporary resource from a factory with a stable url, solving this problem. * POST is consumed on the POE resource, and can't be used for other normal purposes. For example, I can't use this mechanism to create a factory resource. OTOH, the POST null approach only deals with creation of the resource. POST is available for normal uses on the created resource. On HTTPLR: * This seems fairly similar to the POST null proposal. In fact, step one of the upload protocol appears to be a null POST with step two being the PUT. I'm not sure about the explicit client DELETE of the channel, and thus I'm not sure about the need for the channel concept at all. A server must be free to time out the new resource in case of premature client failure, so you can't guarantee delivery unless the request sequence completes before this timeout. In light of this I prefer the channel concept to simply be replaced by the concept of a created resource. * I think there is some danger that the message could be read to be a whole HTTP request or SOAP request or other request that needs to be delivered. That interpretation doesn't smell right to me, and I think that any suggestion of message transfer should be explicitly avoided in favour of state transfer. 
* The GET appears OK, though clearly the server must also be free to purge old messages once its buffers start to fill up. This is a matter of summarisation that also affects pub/sub mechanisms. I think my suggestion remains intact and I'll look to implementing it where at most once delivery is required in my architecture. The current factory resource pattern where a new resource is created by POSTing its state to the factory is replaced by a two-phase operation. Phase one is the same POST but with no content: >> POST http://example.com/resourcefactory << 201 Created << Location: http://example.com/theresource Phase two is to PUT the content you would have otherwise POSTed: >> PUT http://example.com/theresource >> <<the resource state>> << 200 OK Either step can be repeated safely so long as the client doesn't backtrack by doing a POST after a PUT has been sent and so long as the server doesn't time out the created resource before the client is sure the creation has occurred. If 200 is returned from the POST a reliable delivery is not possible and a regular POST should be attempted. To support a reliable POST the server must create a temporary resource from the POST. Benjamin.
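Benjamin's two-phase exchange can be sketched as a toy in-memory client and server; all names here are illustrative, and the dict stands in for actual HTTP message exchange:

```python
# Phase 1: an empty POST to the factory mints a URL (safe to repeat,
# since no state has been transferred yet). Phase 2: PUT the state to
# that URL (idempotent, so it can be retried until acknowledged).
import itertools

class Server:
    def __init__(self):
        self._counter = itertools.count(1)
        self.resources = {}

    def post_factory(self):
        url = "/theresource/%d" % next(self._counter)
        self.resources[url] = None          # created, no state yet
        return 201, url

    def put(self, url, state):
        if url not in self.resources:
            return 404, None                # e.g. server timed the URL out
        self.resources[url] = state         # repeat-safe: same state, same result
        return 200, url

def reliable_create(server, state, retries=3):
    """Phase 1: POST null to mint a URL; phase 2: PUT the state, retrying."""
    status, url = server.post_factory()
    assert status == 201                    # 200 would mean: fall back to plain POST
    for _ in range(retries):
        status, _ = server.put(url, state)
        if status == 200:
            return url
    raise RuntimeError("creation did not complete before server timeout")
```

The retry loop is where the reliability comes from: a lost PUT response can be replayed without risk, which is exactly what a lone POST of the state could not offer.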
On 27.01.2007, at 06:32, Benjamin Carlyle wrote: > > Problems with POE: > * The specification does not cover how the POE resource is created. > Presumably, it is through a POST which could lead to a chicken and egg > situation. Yes to both. I had another thought the other day: The model I am thinking about to achieve POE is based on the use of Atom and the inclusion of an ID in the POST body (or an HTTP header). POE-aware clients would receive the ID to use from a factory, non-POE-aware clients would just do the normal POST. The problem with getting the IDs is of course that it violates GET's idempotency (every GET will result in a new ID). OTOH, IDs are cheap and if we are not talking about hundreds of GETs per second to the factory, it is probably not that bad at all. It is after all just a resource changing over time. Another thought I had was that the client could probably create the ID itself (e.g. a tag: URI) and a new HTTP return code could indicate to the client that the ID wasn't suitable (together with a good one in the payload). Thoughts? Jan
On 27.01.2007, at 11:32, Jan Algermissen wrote: > > I had another thought the other day: Well, to be fair, credits for this actually go to Phil: http://www.imc.org/atom-protocol/mail-archive/msg08072.html Jan
On 1/27/07, Jan Algermissen <algermissen1971@...> wrote: > > The problem with getting the IDs is of course that it violates GET's > idempotency (every GET will result in a new ID). OTH, IDs are cheap > and if we are not talking about hundreds of GETs per second to the > factory, it is propably not that bad at all. It is after all just a > resource changing over time. > Breaking the idempotency of GET is very bad ... Why not have a 'factory' GET, that redirects to a new location every time, e.g. --> GET /id/new <-- 302 Found Location: /id/abc123 --> GET /id/new <-- 302 Found Location: /id/abc124 ... etc Regards, Alan Dean
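Alan's redirecting factory can be sketched as a handler function; names are invented, and the `/id/abc…` URL shape follows his example. Note the `Cache-Control: no-store` header: it keeps a shared cache from replaying one client's redirect to another, which is the hazard Bill raises later in the thread.

```python
# GET /id/new answers with a 302 to a freshly minted ID URL. The minting
# counter is still server-side state (Jan's objection stands), but the
# redirect keeps the ID out of any cacheable 200 response body.
import itertools

_ids = itertools.count(123)

def get_id_new():
    """Handle GET /id/new with an uncacheable redirect to a fresh ID URL."""
    location = "/id/abc%d" % next(_ids)
    headers = {"Location": location, "Cache-Control": "no-store"}
    return 302, headers
```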
On 27.01.2007, at 11:49, Alan Dean wrote: > On 1/27/07, Jan Algermissen <algermissen1971@...> wrote: >> >> The problem with getting the IDs is of course that it violates GET's >> idempotency (every GET will result in a new ID). OTOH, IDs are cheap >> and if we are not talking about hundreds of GETs per second to the >> factory, it is probably not that bad at all. It is after all just a >> resource changing over time. >> > > Breaking the idempotency of GET is very bad ... Yes, but a) it is the server's responsibility and b) it is very controlled in this case. (Is there any harm in those HTML pages that show "visited N times" counters? The foo.org/collection/latestPOEId resource would just behave the same way.) > > Why not have a 'factory' GET, that redirects to a new location > every time, e.g. But the side effect would still be there. Jan > > --> > GET /id/new > > <-- > 302 Found > Location: /id/abc123 > > --> > GET /id/new > > <-- > 302 Found > Location: /id/abc124 > > ... etc > > Regards, Alan Dean
Jan Algermissen wrote: > The problem with getting the IDs is of course that it violates GET's > idempotency (every GET will result in a new ID). OTOH, IDs are cheap and > if we are not talking about hundreds of GETs per second to the factory, > it is probably not that bad at all. It is after all just a resource > changing over time. The problem in my mind is GETting a cached ID and sharing it with someone else. If you want to serve IDs have clients use POST. cheers Bill
Benjamin Carlyle wrote: > > On HTTPLR: > * This seems fairly similar to the POST null proposal. In fact, step one > of the upload protocol appears to be a null POST with step two being the > PUT. I'm not sure about the explicit client DELETE of the channel, and > thus I'm not sure about the need for the channel concept at all. A > server must be free to time out the new resource in case of premature > client failure, so you can't guarantee delivery unless the request > sequence completes before this timeout. Perhaps. You could guarantee delivery by having the client restart the exchange if the server indicates a timeout (or generally ending the exchange). It's another confused client scenario. IME, the operational scenarios are exchanges that are started but a client will never finish (phantoms), which is why I documented them. > In light of this I prefer the > channel concept to simply be replaced by the concept of a created > resource. Maybe. Where you're worried about the reality of timeouts, I'm worried about the reality of HTTPLR acting as a gateway for MOMs. > * I think there is some danger that the message could be read to be a > whole HTTP request or SOAP request or other request that needs to be > delivered. That interpretation doesn't smell right to me, and I think > that any suggestion of message transfer should be explicitly avoided in > favour of state transfer. Again, maybe. Did you see any testable/operational consequences? > * The GET appears OK, though clearly the server must also be free to > purge old messages once its buffers start to fill up. This is a matter > of summarisation that also affects pub/sub mechanisms. > > I think my suggestion remains intact and I'll look to implementing it > where at most once delivery is required in my architecture. Fair enough; at most once isn't the same as guaranteed.
I had thought once about clients and servers being able to negotiate the SLA, but profile negotiation in internet protocols never seems to work out just so. cheers Bill
On Jan 27, 2007, at 7:13 AM, Bill de hOra wrote: > Jan Algermissen wrote: > >> The problem with getting the IDs is of course that it violates GET's >> idempotency (every GET will result in a new ID). OTOH, IDs are >> cheap and >> if we are not talking about hundreds of GETs per second to the >> factory, >> it is probably not that bad at all. It is after all just a resource >> changing over time > > > The problem in my mind is GETting a cached ID and sharing it with someone > else. > If you want to serve IDs have clients use POST. > Where does it say that GET is idempotent? My understanding was that GET is "safe," but not necessarily idempotent. Returning a new ID each time you do a GET seems plenty safe to me. People used to do that all the time when they put a "This page has been viewed N times" counter on their pages. And can't you simply put a cache-control header that says don't cache and must revalidate to prevent people getting duplicate IDs? If someone backs up with the back button on a browser, they might attempt the same ID twice, but I believe detecting that is the point of the proposal--i.e., to prevent the same thing from being posted twice. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
On 1/27/07, Bill Venners <bv-svp@...> wrote: > > Where does it say that GET is idempotent? http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.2 "9.1.2 Idempotent Methods Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent." Alan Dean
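The RFC's definition is easy to see in miniature. This toy resource store (all names invented here) shows why repeated PUT or DELETE requests leave the server in the same state as a single one, while an append-style POST does not:

```python
# Idempotence per RFC 2616 9.1.2: the side-effects of N > 0 identical
# requests are the same as for a single request.
store = {}

def put(url, body):
    store[url] = body                        # repeat: same end state

def delete(url):
    store.pop(url, None)                     # deleting twice: same end state

def post_append(url, body):
    store.setdefault(url, []).append(body)   # each repeat changes state
```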
Bill Venners wrote: > > On Jan 27, 2007, at 7:13 AM, Bill de hOra wrote: > >> Jan Algermissen wrote: >> >>> The problem with getting the IDs is of course that it violates GET's >>> idempotency (every GET will result in a new ID). OTOH, IDs are cheap and >>> if we are not talking about hundreds of GETs per second to the factory, >>> it is probably not that bad at all. It is after all just a resource >>> changing over time >> >> >> The problem in my mind is GETting a cached ID and sharing it with someone else. >> If you want to serve IDs have clients use POST. >> > Where does it say that GET is idempotent? My understanding was that GET > is "safe," but not necessarily idempotent. Returning a new ID each time > you do a GET seems plenty safe to me. Hi Bill, It's not that. The safeness of GET can work against you, if the server is required to issue distinct ids. client A GET -------[cache] -------> [origin server] 200 Ok id:23098765678 client B GET -------[cache] x 200 Ok id:23098765678 client B was served from cache, so now client A and B are working with the same ID. Not good. Hence, use POST for this kind of pattern, unless you have some other means of distinguishing A and B requests. cheers Bill
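Bill's failure mode can be reproduced with a toy cache model. Nothing here is real HTTP: the dict stands in for a shared cache, which may legally store a 200 response to GET but must not cache responses to POST.

```python
# An origin that mints a fresh ID per request, fronted by a shared cache.
# Cached GETs hand the same ID to different clients; POST bypasses the
# cache, so every client gets its own ID.
next_id = [23098765678]

def origin():
    next_id[0] += 1
    return next_id[0]

cache = {}

def get_via_cache(url):
    if url not in cache:          # a shared cache may store a GET response
        cache[url] = origin()
    return cache[url]             # ...and replay it to the next client

def post_via_cache(url):
    return origin()               # responses to POST are not cacheable
```

Bill Venners' suggested fix of `Cache-Control: no-cache` headers would also work, but it relies on every intermediary honouring them; using POST sidesteps the question entirely.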
Bill, On 27.01.2007, at 16:13, Bill de hOra wrote: > Jan Algermissen wrote: > >> The problem with getting the IDs is of course that it violates GET's >> idempotency (every GET will result in a new ID). OTOH, IDs are >> cheap and >> if we are not talking about hundreds of GETs per second to the >> factory, >> it is probably not that bad at all. It is after all just a resource >> changing over time > > > The problem in my mind is GETting a cached ID and sharing it with someone > else. > Yes, that might be a problem (an ill-behaving cache should not confuse other people's POSTs). > If you want to serve IDs have clients use POST. Yes, but then we have the chicken-egg problem that Benjamin mentioned. If one wants to do reliable POST, then using simple POST to get the IDs is probably a bit brain dead, eh? OTOH, if the client understands what the extra capability of support for POE means (and obviously it has to do so) then part of that understanding could be that myriads of retried normal POSTs to the factory are ok by definition. What I don't see yet is whether this 'overwriting' of HTTP semantics is ok or bad or architecturally deadly. I recall that WAKA might introduce 'Request Identifiers', is that something that could be used here? Would such identifiers be generated by the client and would re-POSTs use the same identifier? Or maybe letting the client supply the ID for the POST (and have the server complain if the ID is bad) is indeed the right thing to do, thoughts? Thanks, Jan > > cheers > Bill
On 2007/01/27, at 4:32 PM, Benjamin Carlyle wrote: > On Sat, 2007-01-20 at 19:57 +0000, Bill de hOra wrote: >> Benjamin Carlyle wrote: >>> I have been thinking about the idempotency of POST lately, and the >>> exchange with Steve Bjorg has prompted me to write about it. My >> current >>> direction is to treat a POST of null to a factory resource as >>> idempotent. >>> [...] >>> Thoughts? >> Start here: >> http://www.mnot.net/drafts/draft-nottingham-http-poe-00.txt >> http://www.dehora.net/doc/httplr/draft-httplr-01.html > > Thanks for the links. > > Problems with POE: > * The specification does not cover how the POE resource is created. That's intentional :) > Presumably, it is through a POST which could lead to a chicken and egg > situation. Sending a POST to get the form is one way, but not the only. Another would be to use GET, and assure that the response isn't cacheable; there are a number of ways to assure that the links you've given out don't collide without keeping a list of them, the easiest involving timestamps (along with some other information) or GUIDs. Yet another would be to delegate creating the links to the client, e.g., using JavaScript. I don't see how sending a POST to get the POE link leads to a chicken-and-egg problem, unless you also need the "get the form" operation (which really should be a GET) to be reliable as well. Even if you did, you can always bootstrap it with one that isn't required to be reliable. > * POST is consumed on the POE resource, and can't be used for other > normal purposes. For example, I can't use this mechanism to create > a factory resource. 201 Created + Location. Cheers, P.S. I'm happy to update/improve upon POE if people think it's worth walking down that path. -- Mark Nottingham http://www.mnot.net/
On 1/25/07, Edward Summers <ehs@...> wrote: > Alan: nice work! Is there a higher resolution version of: > > http://farm1.static.flickr.com/175/367132415_ee9c3b9b40_b_d.jpg > > available? Yes. I don't know why, but flickr lowered the resolution of the uploaded file. I will upload a full-res copy over the weekend to my site and publish the URL here. Alan
On 1/25/07, Edward Summers <ehs@...> wrote: > Alan: nice work! Is there a higher resolution version of: > > http://farm1.static.flickr.com/175/367132415_ee9c3b9b40_b_d.jpg > > available? A full-res jpg can be found at http://thoughtpad.net/who/alan-dean/image/http-headers-status.jpg Regards, Alan Dean
On 1/28/07, Dave Pawson <dave.pawson@...> wrote: > > Yes. I don't know why, but flickr lowered the resolution of the uploaded file. > > > > I will upload a full-res copy over the weekend to my site and publish > > the URL here. > > <greedy>An SVG format would make it even more useful Alan</greedy> > > I'm wondering if flickr is hurting of storage space, hence don't like > large files? The diagram was edited in Visio. I will check tomorrow to see if it can export in SVG format. Regards, Alan Dean
Jan Algermissen wrote: > > > Bill, > > > If you want to serve IDs have clients use POST. > > Yes, but then we have the chicken-egg problem that Benjamin > mentioned. If one wants to do reliable POST, then using simple POST > to get the IDs is probably a bit brain dead, eh? Not brain dead, it will work, as the point is you don't need a reliable single POST to start a reliable exchange - reliability is introduced using a sequence of requests and a key for communicating shared state (HTTP is an asymmetric protocol, so it isn't possible to do this in one step anyway). If you request an ID and fail to get a response, then request again. The main thing is that the ID is not shared with other clients. Without that the server can't reason correctly about exchange state. > What I don't see yet is whether this 'overwriting' of HTTP semantics > is ok or bad or architecturally deadly. > > I recall that WAKA might introduce 'Request Identifiers', is that > something that could be used here? Would such identifiers be > generated by the client and would re-POSTs use the same identifier? HTTPLR uses URLs to identify the message exchange (that is, the ids are URLs). That's for management reasons - you want to be able to distinguish between the exchange state and what's being exchanged. cheers Bill
Mark Nottingham wrote: > I don't see how sending a POST to get the POE link leads to a > chicken-and-egg problem, unless you also need the "get the form" > operation (which really should be a GET) to be reliable as well. Even if > you did, you can always bootstrap it with one that isn't required to be > reliable. I agree. I think POE is sound, formally speaking. The chicken and egg problem I've seen is that, because HTTP is asymmetric (clients always initiate), it's possible that either the client or the server gets confused as to where things are at, because the client never got the response, or never got the response in time. I assumed that away in HTTPLR by saying that infinite requests will result in infinite responses. Protocol axioms are wonderful that way ;) cheers Bill
On 28.01.2007, at 19:18, Bill de hOra wrote: > If you request an ID and fail to get a response, then > request again. Well, ok. But this behavior is only appropriate if suggested as part of the spec. It contradicts HTTP in the sense that you as a client should know that there might be side effects, so re-POSTing is somewhat dangerous. It is the special character of the POE factory that the re-POSTing side effect is of no concern. Thanks, Jan
Jan Algermissen wrote: > > On 28.01.2007, at 19:18, Bill de hOra wrote: > >> If you request an ID and fail to get a response, then >> >> request again. >> > > Well, ok. But this behavior is only appropriate if suggested as part of > the spec. It contradicts HTTP in the sense that you as a client should > know that there might be side effects, so re-POSTing is somewhat > dangerous. It is the special character of the POE factory that the > re-POSTing side effect is of no concern. I see what you're saying. But if the client doesn't get a response, or gets a non-2xx response what should it do, according to HTTP? cheers Bill
On 28.01.2007, at 21:17, Bill de hOra wrote: > I see what you're saying. But if the client doesn't get a response, or > gets a non-2xx response what should it do, according to HTTP? At least not go into "Brute Force Retry Mode" :-) But I guess in this case it is ok. For my part, I'll follow the POST- to-POE-collection-factory road. Thanks for all the comments - this thing is trickier than it first appears. Jan
On 1/28/07, Dave Pawson <dave.pawson@...> wrote: > > <greedy>An SVG format would make it even more useful Alan</greedy> An svg version now available at http://thoughtpad.net/who/alan-dean/image/http-headers-status.svg as well as the original http://thoughtpad.net/who/alan-dean/image/http-headers-status.jpg Regards, Alan Dean
On 29/01/07, Alan Dean <alan.dean@...> wrote: > An svg version now available at > > http://thoughtpad.net/who/alan-dean/image/http-headers-status.svg Views great in Batik. Pan and zoom without loss of clarity! Thanks Alan. Very useful. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
Alan Dean wrote: > > > On 1/28/07, Dave Pawson <dave.pawson@... > <mailto:dave.pawson%40gmail.com>> wrote: > > > > <greedy>An SVG format would make it even more useful Alan</greedy> > > An svg version now available at > > http://thoughtpad.net/who/alan-dean/image/http-headers-status.svg It would be great if you could also add a namespace declaration for XLink. This will make it work in Firefox :-) xmlns:xlink="http://www.w3.org/1999/xlink" /niklas
On 1/29/07, Niklas Gustavsson <niklas@...> wrote: > > If would be great if you could also add a namespace declaration for > XLink. This will make it work in Firefox :-) > > xmlns:xlink="http://www.w3.org/1999/xlink" Fixed up. Thanks for the tip :-) Alan
Alan Dean wrote: > An svg version now available at > > http://thoughtpad.net/who/alan-dean/image/http-headers-status.svg

I still dispute allowing POST to non-existent resources. If it doesn't exist, how can you POST into it? 404 or 410 IMO.

Which incidentally is another condition to put on that. Hence, if anyone on screen readers, HTMLifying clients or similar will forgive my ASCII-art.

PUT? -> true -> (same as previous)
 |
 |false
 |
KNOWN TO HAVE EXISTED PREVIOUSLY -> true -> 410 Gone
 |
 |false
 |
404 Not Found

I'd also add that some statuses can be calculated before all of this (e.g. we can sometimes say that /1/2/3 would map to something that would return a 403 before we even work out exactly what that something is and whether it exists or not).
On 1/30/07, Jon Hanna <jon@...> wrote: > Alan Dean wrote: > > An svg version now available at > > > > http://thoughtpad.net/who/alan-dean/image/http-headers-status.svg > > > I still dispute the allowing POST to non-existent resources. > > If it doesn't exist, how can you POST into it. 404 or 410 IMO. 404 just means no representations are available (source), and doesn't necessarily mean that there isn't any processing logic there (sink). Most HTML form processors are this kind of resource; try invoking GET on the "action URI" of an arbitrary form. Of course, it's possible and useful (IME) to be able to use GET to represent the state of the sink, but it's not required. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Mark Baker wrote: >> If it doesn't exist, how can you POST into it. 404 or 410 IMO. > > 404 just means no representations are available (source), and doesn't > necessarily mean that there isn't any processing logic there (sink). > Most HTML form processors are this kind of resource; try invoking GET > on the "action URI" of an arbitrary form. If it's there and I can POST to it but not GET it then a GET should return 405 Method Not Allowed with POST in the Allow header. Then I know that it's there and I can POST to it. > Of course, it's possible and useful (IME) to be able to use GET to > represent the state of the sink, but it's not required. Agreed, but a human-readable (or readable by whatever type of client the application is designed to cater for) error message along with the 405 Method Not Allowed could be just as useful.
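Jon's suggestion above — a resource that exists but only accepts POST should answer GET with 405 and advertise what is allowed — can be sketched as a toy dispatcher (a plain function, not any particular framework; names are hypothetical):

```python
def respond(method, allowed_methods):
    """A resource that exists but supports only some methods answers an
    unsupported method with 405 Method Not Allowed plus an Allow header."""
    if method not in allowed_methods:
        headers = {"Allow": ", ".join(sorted(allowed_methods))}
        body = "Use one of: " + headers["Allow"]
        return 405, headers, body
    return 200, {}, "ok"

# GET on a POST-only form processor:
status, headers, body = respond("GET", {"POST"})
```

The client now knows the resource is there and that it can POST to it, which is more informative than a bare 404.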
On 1/30/07, Jon Hanna <jon@...> wrote: > > I still dispute the allowing POST to non-existent resources.

I've had a think about this and read the spec again. So far as I can tell, the spec does not forbid POSTing onto a missing resource. So I have remodelled the flow to show two paths, where one permits and the other prevents POSTing to missing resources.

> > If it doesn't exist, how can you POST into it. 404 or 410 IMO.
> > Which incidentally is another condition to put on that. Hence, if anyone > on screen readers, HTMLifying clients or similar will forgive my ASCII-art.
>
> PUT? -> true -> (same as previous)
> |
> |false
> |
> KNOWN TO HAVE EXISTED PREVIOUSLY -> true -> 410 Gone
> |
> |false
> |
> 404 File Not Found

Good feedback - have added 410 Gone.

> I'd also add that some statuses can be calculated before all of this > (e.g. we can sometimes say that /1/2/3 would map to something that would > return a 403 before we even work out exactly what that something is and > whether it exists or not).

For completeness, I have added the resolution of status codes prior to testing for the existence of a resource. There is a new version of the activity diagram uploaded. The changes are extensive. For a list of all representations and a version history, see http://thoughtpad.net/who/alan-dean/image/http-headers-status

I'm finding this a very useful process - thanks for the feedback everyone :-)

Regards, Alan Dean
Hey guys, I'm having a bit of a disagreement with our chief architect (and head development honcho) about whether or not our app is restful and/or what we need to make it so.

My first contention is that because your average web browser really only implements the GET and POST HTTP methods, you can't really code a human facing website to be really RESTful. Basically, IE is not a RESTful client.

Having said that, let's assume that my first suggestion is wrong and that you can be RESTful on a human facing website. Our application does online banking. All transactions take multiple steps (at a minimum, there is the form, a confirmation and a receipt, although some transactions like a repeating transfer into some external account have multiple form steps). Our architect says that the RESTful approach is to have a unique URL for every step of the transaction, that doing so correctly encodes all the States as resources. I argue that this is not the case for a few reasons:

1) It doesn't correctly include all the States, as error conditions are not considered.
2) It's only a coincidence of our implementation (it's 5+ years old) that a user needs to make several trips to the server in order to complete a transaction. We could have just as easily built an AJAXy UI which only makes one call to the server. This seems to me to be an architectural smell.
3) While it could make sense that one would have a series of resources which transform something (in this case form parameters) into some intermediate state until that something is ready to be POSTed to the "real" resource, it seems a contrivance.

By contrast, our current implementation hides all the steps of the transaction behind a single URL that the end user sees. 
The server code (which could be written as a RESTful client for a RESTful online banking service) that the user most directly interacts with is responsible for determining, based on the information provided, what content to send to the browser, either the receipt or some intermediate form. So... which of us is less wrong? Adam
"fyzixadam" <adam.vandenhoven@...> writes: > I'm having a bit of a disagreement with our chief architect (and head development honcho) > about whether or not our app is restful and/or what we need to make it so. > > My first contention is that becuase your average web browser really only implements the > GET and POST HTTP methods that you can't really code a human facing website to be > really RESTful. Basically, IE is not a RESTful client. It has a javascript engine that can do PUT and DELETE and OPTIONS and anything else you like. That is a well-worn pattern for RESTful clients. > Having said that, lets assume that my first suggestion is wrong and that you be RESTful on > a human facing website. Or application does online banking. All transactions take multiple > steps (at a minimum, there is the form, a confirmation and a reciept, although some > transaction like a repeating transfer into some external account have multiple form steps). > Our architect says that the RESTful approach is to have a unique URL for every step of the > transaction, that doing so correctly encodes all the States as resources. I argue that this is > not the case for a few reasons: > 1) It doesn't correctly include all the States as error conditions are not considered > 2) That its only a coincidence of our implementation (its 5+ years old) that a user needs > to make several trips to the server in order to complete a transaction. We could have just > as easily built an AJAXy UI which only makes one call to the server. This seems to me to be > an architectural smell. > 3) That while it could make sense that one would have a series of resources which > transform something (in this case form parameters) into some intermediate state until that > something is ready to be POSTed to the "real" resource, it seems a contrivance. > > By contrast, our current implementation hides all the steps of the transaction behind a > single URL that the end user sees.
The server code (which could be written as a RESTful > client for a RESTful online banking service) that the user most directly interacts with is > responsible for determining, based on the information provided, what content to send to > the browser, either the reciept or some intermediate form. You don't have to encode all the states AS resources. You have to encode the state IN the resources. A common pattern espoused here is the POST, then PUT. It works really well for transactional models. You POST to create a transaction resource... and then PUT its state to it (perhaps repeatedly). -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
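Nic's POST-then-PUT pattern can be sketched as a toy in-memory server (the class, URI layout, and state names are assumptions for illustration, not from the thread): POST mints the transaction resource and returns its location; PUT to that resource is idempotent, so a retried PUT leaves the same end state.

```python
import itertools

class TransactionServer:
    """In-memory sketch: POST creates a transaction, PUT sets its state."""
    def __init__(self):
        self._seq = itertools.count(1)
        self.transactions = {}

    def post(self, collection):
        # POST /transactions -> 201 Created plus the Location of the new resource
        uri = f"{collection}/{next(self._seq)}"
        self.transactions[uri] = {"state": "open"}
        return 201, uri

    def put(self, uri, state):
        # PUT the transaction's state; repeating the same PUT is harmless
        if uri not in self.transactions:
            return 404, None
        self.transactions[uri]["state"] = state
        return 200, self.transactions[uri]

server = TransactionServer()
status, location = server.post("/transactions")
server.put(location, "confirmed")
server.put(location, "confirmed")   # a retried PUT leaves the same end state
```

A multi-step banking flow then becomes: POST once to create the transaction, then PUT successive states (confirmed, submitted, ...) to the same URI.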
fyzixadam wrote: > My first contention is that becuase your average web browser really only implements the > GET and POST HTTP methods that you can't really code a human facing website to be > really RESTful. Basically, IE is not a RESTful client. Firstly, the most popular modern browsers allow for the full use of all HTTP methods in AJAX (a few bugs aside). Secondly, I don't think being restricted to GET and POST means a client isn't RESTful, it's just restricted in terms of what RESTful features it can use. In certain cases this is appropriate (it makes sense for some clients to be deliberately restricted to GET only - if they are meant to only have a read-only view on the resources in question). As such I'd say that GET and POST can be enough for a fully RESTful application, but not enough for all fully RESTful applications. > Having said that, lets assume that my first suggestion is wrong and that you be RESTful on > a human facing website. Or application does online banking. All transactions take multiple > steps (at a minimum, there is the form, a confirmation and a reciept, although some > transaction like a repeating transfer into some external account have multiple form steps). > Our architect says that the RESTful approach is to have a unique URL for every step of the > transaction, that doing so correctly encodes all the States as resources. I argue that this is > not the case for a few reasons: [snip] I think you are both potentially describing RESTful applications, and potentially not (if other matters not stated break the constraints). Which of you is describing the better RESTful application is another matter. In terms of both your questions here I get the sense that you are saying "there is a RESTful way of doing things". It's more accurate to say "some ways of doing things are RESTful, because they don't break the constraints of REST". 
In particular, something being RESTful doesn't mean it takes full advantage of what REST offers, but rather it doesn't try to take advantage of something REST doesn't offer. More importantly, as big a fan of REST as I am, something being RESTful doesn't automatically mean it doesn't suck. The not-sucking constraint is both more important than any constraint in REST and often harder to meet :)
..from Larry O'Brien called "Give it a REST", decrying the failure of WS-* in the real world. http://www.sdtimes.com/printArticle/column-20070115-02.html Probably preaching to the converted on this list, but such material is good to have in your kit bag, when fending off the forces of wsevil! ;-) Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
And there's even a t-shirt for you... http://www.cafepress.com/rest "Web Services - Just give it a REST" http://www.cafepress.com/rest.4541012 (I put these up back in mid-2002, my how time flies) I think Paul Prescod is the only one that bought - but he got the obscene one (which I've taken down). > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Andrzej > Jan Taramina > Sent: Wednesday, January 31, 2007 12:55 PM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] Great article... > > ..from Larry O'Brien called "Give it a REST", decrying the > failure of WS-* in the real world. > > http://www.sdtimes.com/printArticle/column-20070115-02.html > > Probably preaching to the converted on this list, but such > material is good to have in your kit bag, when fending off > the forces of wsevil! ;-) > > > Andrzej Jan Taramina > Chaeron Corporation: Enterprise System Solutions > http://www.chaeron.com
S. Mike Dierken wrote: > I think Paul Prescod is the only one that bought - but he got the obscene > one (which I've taken down). What ever happened to Paul Prescod anyway? $ curl -si http://www.prescod.net | grep -i modified Last-Modified: Thu, 02 Jan 2003 19:10:29 GMT His HTTP and REST lit had a tremendous impact on me. -- Ryan
I think he's a director of program management somewhere, I think he got back into markup technologies. He might be at xmetal.com - I haven't spoken with him in a while. We used to work together long time back, I should give him a call... > -----Original Message----- > From: Ryan Tomayko [mailto:rtomayko@...] > Sent: Wednesday, January 31, 2007 9:59 PM > To: S. Mike Dierken > Cc: andrzej@...; rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Great article... > > S. Mike Dierken wrote: > > I think Paul Prescod is the only one that bought - but he got the > > obscene one (which I've taken down). > > What ever happened to Paul Prescod anyway? > > $ curl -si http://www.prescod.net | grep -i modified > Last-Modified: Thu, 02 Jan 2003 19:10:29 GMT > > His HTTP and REST lit had a tremendous impact on me. > > -- > Ryan
Hello,

I have a question about the If-Match header. As a client, I would like to PUT a new representation of a resource that I got in the past with entity tag "A". Therefore, I send a PUT request with the "If-Match" header. Unfortunately, this resource has been deleted. What should the behaviour of the server be?

Here is an excerpt of HTTP RFC 2616: "If none of the entity tags match, or if "*" is given and no current entity exists, the server MUST NOT perform the requested method."

It could be understood in 2 ways:
- PUT the new representation, because "*" has not been given
- send a 412 status, because "if none of the entity tags match" applies: the "A" entity tag no longer matches any current entity

Best regards,
Thierry Boileau
Thierry wrote: > Hello, > > I have a question about the if-match header. > As a client, I would like to PUT a new representation of a resource > that I get in the past with entity tag "A". Therefore, I send a PUT > request with the "if-match" header. > Unfortunately, this resource has been deleted. What could be the > behaviour of the server ?

412 because the If-Match has failed. At the client you know that this 412 means one of the following:

1. The resource was changed and hence has a different entity-tag.
2. The resource was deleted.
3. Another precondition failed.

Assuming there were no other preconditions (if there are then this is due to something your client is doing, and hence the extra complexity is coming from your application domain rather than the spec; anyway it's out of scope for the If-Match matter), then a GET on the URI will return one of:

1. A 301 response because the old resource was moved. Deal with this as appropriate.
2. A 2xx response with a new entity representing the resource with a new E-Tag. Deal with this as appropriate (most likely warning the user that they are going to over-write changes and asking how they would like to proceed).
3. A 404 or 410 response indicating the deletion that happened in your hypothetical example (but which we cannot assume happened). Deal with this as appropriate (most likely either retrying the PUT without an If-Match or else warning the user about the deletion and asking how they would like to proceed).
4. A 5xx or a 4xx (other than 404 or 410) response indicating an error.
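Jon's client-side recovery steps can be written down as a small decision function. This is only a sketch of the enumeration he gives (the function name and return strings are hypothetical), not a complete HTTP client:

```python
def after_412(get_status):
    """Map the status of a follow-up GET (issued after a PUT received 412)
    to a suggested client action, following Jon's four cases."""
    if get_status == 301:
        return "follow redirect"    # resource moved: retry against the new URI
    if 200 <= get_status < 300:
        return "warn: overwriting"  # resource changed: fetch the new ETag first
    if get_status in (404, 410):
        return "warn: deleted"      # resource gone: maybe retry PUT without If-Match
    return "report error"           # some other 4xx/5xx
```

The point of the pattern is that a single 412 is ambiguous; the follow-up GET disambiguates it.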
412 Precondition Failed I have published my interpretation of the HTTP spec regarding header resolution at http://thoughtpad.net/who/alan-dean/image/http-headers-status Regards, Alan Dean On 2/1/07, Thierry <thboileau@...> wrote: > > I have a question about the if-match header. > As a client, I would like to PUT a new representation of a resource > that I get in the past with entity tag "A". Therefore, I send a PUT > request with the "if-match" header. > Unfortunately, this resource has been deleted. What could be the > behaviour of the server ? > Here is an excerpt of the HTTP rfc 2616 : > If none of the entity tags match, or if "*" is given and no current > entity exists, the server MUST NOT perform the requested method".
I'm sure this is a total noob question but I haven't been able to find an answer :) My beloved has started learning ancient Greek. So I'm planning to set up a simple vocabulary training program ... I imagine a central resource called "trainer" who's picking exercises randomly. A request to the trainer would mean: "give me the next exercise you want me to complete". The trainer responds with one of many exercises most of the time. After having completed a certain number of exercises the trainee would be directed to some kind of summary or statistics. How should I model this? As the response of the trainer resource is meant to change randomly for each request, should I POST to this resource? Or should I GET it and redirect the client through a location header? Or do something else that I'm missing? This question could probably be reduced to "How do I RESTfully throw a dice?" Thanks in advance! -- sven fuchs fon: +49 (58 45) 98 89 58 artweb design fax: +49 (58 45) 98 89 57 breite straße 65 www: http://www.artweb-design.de de-29468 bergen mail: svenfuchs@artweb-design.de
Thierry, Thank you for your kind comments. I based my interpretation of the handling of "If-Match: *" where a resource does not exist on the following paragraph: "The meaning of "If-Match: *" is that the method SHOULD be performed if the representation selected by the origin server [...] exists, and MUST NOT be performed if the representation does not exist." from http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24 Unfortunately, the spec seems to treat this header value as a special case - it does not specify what to do given any other conditional header when the resource is missing. I therefore took this to mean that any other conditional header has no effect when the resource is missing. This is simply my interpretation, of course, and I would like to hear the opinions of others. If an alternative interpretation gathers a consensus then I will amend the diagram. Regards, Alan Dean On 2/1/07, Thierry Boileau <thboileau@...> wrote: > Hello Alan, > > First, I have to thank you for your great effort providing this so > comprehensive activity diagram. This is really great! > > However, I have one question about the case where no resource is found, > but an "If-match" header has been provided. > If I understand your diagram, you expect a 412 status [precondition > failed] when no resource exists and if and only if the "If-match" > header's value equals to "*". > I was not sure myself, therefore I've checked the HTTP RFC (which is not > really clear), and finally posted a mail to the "REST discuss" yahoo groups. > It seems that the test is : when no resource exists and if an "If-match" > header has been provided (not only "*"). > > What are you tinking about? > Best regards, > Thierry Boileau
On 2/1/07, Sven Fuchs <svenfuchs@...> wrote:
>
> This question could probably be reduced to "How do I RESTfully throw
> a dice?"
You could implement a dice throw as follows:
-->
GET /dice/throw
<--
302 Found
Location: /dice/4
{representation}
... perhaps you may want the client to specify the number of sides on the dice:
-->
GET /dice/throw/20
<--
302 Found
Location: /dice/11
{representation}
Regards, Alan Dean
Sven Fuchs wrote: > This question could probably be reduced to "How do I RESTfully throw > a dice?" Assuming you have no security requirements upon "cheating" this is a question of one activity (just what that activity is is your very question) mapping to one of a set of resources. Since there is no issue in these resources being independently identified they can be identified by URIs that one can GET from and then POST answers to. The "dice" can indeed be mapped as a redirect. It's perfectly okay to map it as a redirect from a GET - that entities returned from resources change over time is one of the inherent features of REST and this can just as easily be between every single request as it can over more clearly marked times of modification. Hence a 307 Temporary Redirect from the "dice" resource would be apposite. Note that by default 307 responses are not cacheable (which suits your purposes) though there is nothing to stop the target of the redirect being cached (which is a good thing, if chance does mean a repeat on the dice there's no reason why the cache shouldn't be used). If you need to guard against cheating then a different model would be required beyond a purely simple 307.
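The dice-as-redirect idea is easy to mock up. The sketch below is a toy (a plain function stands in for an HTTP server; the URI layout is an assumption): the throw resource answers with Jon's 307 and a Location header, while the face resources themselves stay stable and cacheable.

```python
import random

def handle(method, path):
    """Toy dispatcher for the dice resources discussed above."""
    if method == "GET" and path == "/dice/throw":
        # the "throw" redirects; 307 responses are not cacheable by default
        face = random.randint(1, 6)
        return 307, {"Location": f"/dice/{face}"}, None
    if method == "GET" and path.startswith("/dice/"):
        # the face resources are stable, so caching them is harmless
        face = int(path.rsplit("/", 1)[1])
        return 200, {"Cache-Control": "public"}, f"You rolled a {face}"
    return 404, {}, None

status, headers, _ = handle("GET", "/dice/throw")
status2, _, body = handle("GET", headers["Location"])
```

Each GET of /dice/throw yields a fresh redirect target, which is exactly the "entity changes between requests" behaviour Jon describes.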
Alan Dean wrote: > Unfortunately, the spec seems to treat this header value as a special > case - it does not specify what to do given any other conditional > header when the resource is missing. It does indeed treat this value as a special case, though when you consider it as a wild card its "specialness" is reduced. The spec *does* specify what to do when the resource is missing, but not as explicitly. Consider If-Match: "abcde" when the resource is missing. Does this match an available resource? The answer is clearly "no". Now, when we consider the other rule about "If the request would, without the If-Match header field, result in anything other than a 2xx or 412 status, then the If-Match header MUST be ignored." this means that if the request is GET we will get a 404, but with PUT we will get a 412 (I'm simplifying by assuming that there are no other factors affecting the status code returned).
On 2/1/07, Jon Hanna <jon@...> wrote:
>
> Alan Dean wrote:
> > Unfortunately, the spec seems to treat this header value as a special
> > case - it does not specify what to do given any other conditional
> > header when the resource is missing.
>
> It does indeed treat this value as a special case, though when you
> consider as a wild card it's "specialness" is reduced.
>
> The spec *does* specify what to do when the resource is missing, but not
> as explicitly.
>
> Consider If-Match: "abcde" when the resource is missing.
>
> Does this match an available resource? The answer is clearly "no".
>
> Now, when we consider the other rule about "If the request would,
> without the If-Match header field, result in anything other than a 2xx
> or 412 status, then the If-Match header MUST be ignored." this means
> that if the request is GET we will get a 404, but with PUT we will get a
> 412 (I'm simplifying my assuming that there are no other factors
> affecting the status code returned).
If I understand your contention correctly, you are saying that:
If the resource is missing;
and any value at all is specified in If-Match
then return 412 Precondition Failed
I can see the logic of how you reached that conclusion, but I am left
wondering "if that is the case, why does the spec bother to stipulate
the case of the wildcard - why not stipulate 'any' instead?"
Hmmmmm.....
Alan
Alan Dean wrote: > If I understand your contention correctly, you are saying that: > > If the resource is missing; > and any value at all is specified in If-Match > then return 412 Precondition Failed Nope. If the resource is missing, and any value at all is specified in If-Match, and the request would otherwise result in a 2xx, then return 412 Precondition Failed. > I can see the logic of how you reached that conclusion, but I am left > wondering "if that is the case, why does the spec bother to stipulate > the case of the wildcard - why not stipulate 'any' instead?" Because that's the only condition under which * causes a 412. If-Match: "abcde" can also cause a 412 if there is an entity to return, but none with that E-tag OR if there is no entity at all. If-Match: * can *only* cause a 412 if there is no entity to return.
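Jon's reading of the If-Match rules can be captured in one function. This is a sketch of his interpretation of RFC 2616, simplified to ignore other preconditions; the function name and the `status_without_header` parameter (the status the request would get if If-Match were absent) are assumptions for illustration:

```python
def evaluate_if_match(if_match, current_etag, status_without_header):
    """Sketch of the If-Match rule (RFC 2616 sec. 14.24), per the thread.

    if_match: None, "*", or a tuple of entity tags from the request header
    current_etag: the resource's current entity tag, or None if no entity exists
    status_without_header: what the request would return absent If-Match
    """
    if if_match is None:
        return status_without_header
    # "If the request would ... result in anything other than a 2xx ...
    # then the If-Match header MUST be ignored."
    if not 200 <= status_without_header < 300:
        return status_without_header
    if if_match == "*":
        matches = current_etag is not None  # wildcard matches any existing entity
    else:
        matches = current_etag is not None and current_etag in if_match
    return status_without_header if matches else 412
```

Under this reading, a GET on a missing resource stays 404 (If-Match is ignored), while a PUT that would otherwise succeed fails with 412, whether the header carried "*" or a specific tag.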
On 2/1/07, Sven Fuchs <svenfuchs@...> wrote: > This question could probably be reduced to "How do I RESTfully throw > a dice?" How about; GET /trainer -> 200 Ok Content-Type: text/uri-list http://example.org/activity/9 Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Hello, This is not strictly speaking related to REST per se, but... perhaps someone could point me in the right direction nonetheless... To make a long story short, I'm a bit confused on how the different HTTP entity transformations work together... or not. In other words, from an HTTP server perspective, how does one properly combine things like entity tags, conditional requests, ranges and content encoding, to name a few? Even after going through Jeffrey C. Mogul's "Clarifying the Fundamentals of HTTP" and such, I'm none the wiser :P http://www2002.org/CDROM/refereed/444.pdf Any pointers, guidelines, opinions or advice very welcome :) Thanks in advance. Cheers, PA.
PA wrote: > > > Hello, > > This is not strictly speaking related to REST per se, but... perhaps > someone could point me in the right direction nonetheless... > > To make a long story short, I'm a bit confused on how the different > HTTP entity transformations work together... or not. In other words, > > from an HTTP server perspective, how does one properly combine things > like entity tags, conditional requests, ranges and content encoding to > name a few? > > Even after going through Jeffrey C. Mogul's "Clarifying the > Fundamentals of HTTP" and such, I'm none the wiser :P > > http://www2002.org/CDROM/refereed/444.pdf > <http://www2002.org/CDROM/refereed/444.pdf> > > Any pointers, guidelines, opinions or advice very welcome :) Here's your pointer (:-): come over to the (former) HTTP WG's mailing list: <http://lists.w3.org/Archives/Public/ietf-http-wg/>. Best regards, Julian
> I imagine a central resource called "trainer" who's picking > exercises randomly. A request to the trainer would mean: > "give me the next exercise you want me to complete". The > trainer responds with one of many exercises most of the time. > After having completed a certain number of exercises the > trainee would be directed to some kind of summary or statistics. > > How should I model this? > > As the response of the trainer resource is meant to change > randomly for each request, should I POST to this resource? > Or should I GET it and redirect the client through a location > header? Or do something else that I'm missing? The response is not meant to change randomly - you said "... want me to complete". Until you've indicated that your state is 'complete', then the response should remain unchanged (in order to take into consideration the failure to retrieve the initial response completely).
Here's another approach - each lesson retrieved has a link to the next lesson. The client decides when it is complete, then retrieves the next lesson. That next lesson has yet another link to yet another lesson. And so on. GET /lessons/99 200 Ok Content-Type: application/socratic.lesson+xml <lesson next='/lessons/47'> some lessons to be learned </lesson> > -----Original Message----- > From: S. Mike Dierken [mailto:dierken@...] > Sent: Thursday, February 01, 2007 9:23 PM > To: 'Sven Fuchs'; 'rest-discuss@yahoogroups.com' > Subject: RE: [rest-discuss] How do I RESTfully throw a dice? > (or: randomized responses) > > > I imagine a central ressource called "trainer" who's > picking exercises > > randomly. A request to the trainer would mean: > > "give me the next exercise you want me to complete". The trainer > > responds with one of many exercises most of the time. > > After having completed a certain number of exercises the > trainee would > > be directed to some kind of summary or statistics. > > > > How should I model this? > > > > As the response of the trainer ressource is meant to change > randomly > > for each request, should I POST to this ressource? > > Or should I GET it and redirect the client through a > location header? > > Or do something else that I'm missing? > The response is not meant to change randomly - you said "... > want me to complete". Until you've indicated that your state > is 'complete', then the response should remain unchanged (in > order to take into consideration the failure to retrieve the > initial response completely). > >
Hi, I've never built a REST app but am thinking of going that direction with the next major version of a content repository API used by my employer. There's a few things I'm not sure how to model, the main one being transactions. I'm hoping for some guidance on my path to grokking REST. APP is looking interesting, since most of what we need maps pretty well to member entries, feeds, and media resources. The introspection feature might be useful too. But what about this use case? A third party comes along and licenses 10,000 documents to us. There are lots of cross-links among those documents, so a requirement in the contract is that we must load the entire set successfully, or none at all. Leaving this up to the client is not an option. This is part of a SOA, and there's currently four clients. APP doesn't seem to say anything about transactions, so I'm assuming that it's up to me to find a more generally REST-y solution. The one obvious idea that occurs to me is to define two new resources, a transaction manager and a transaction resource. A transaction is really just a collection. To create a new transaction, POST to the transaction manager with a list of the resource URIs you want to modify. It initializes the transaction by copying those members, and returns the URI of your new transaction. GET the transaction resource to see a list of member URIs, as usual. PUT or DELETE any of those members as usual. POST to the transaction to add new members as usual. To abort, just DELETE the transaction. If this all sounds familiar, maybe it's reminiscent of: http://example.com/subversion/branches http://example.com/subversion/trunk But what does a "commit" look like? I don't see an appropriate verb other than PUT, so I see two options: 1) PUT the URI of the original collection to the transaction. This is a nice and simple message, but means the transaction has to be a little smarter than a normal collection. 
2) PUT the URIs of the transaction and the original collection to the transaction manager. This would be interpreted like "merge the contents of URI 1 into URI 2". (I first thought of POSTing a special message to the T.M. to commit, but realized that would look too much like creating a transaction. It smells RPC. I figure any time I can't tell what's going on by looking at a simple access log, it's not really REST!) Either way, we should check mod times and if a conflict is detected, return an error. 409? How's that sound? Better ideas? P.S. this is my first post to the list. Hello! -- Paul Winkler http://www.slinkp.com
Paul Winkler <pw_lists@...> writes:
> (I first thought of POSTing a special message to the T.M. to commit,
> but realized that would look too much like creating a transaction. It
> smells RPC. I figure any time I can't tell what's going on by looking
> at a simple access log, it's not really REST!)
>
> Either way, we should check mod times and if a conflict is detected,
> return an error. 409?
>
> How's that sound? Better ideas?
We have discussed in the past:
POST /someresource
=> 201 /someresource/1
POST /someresource/1 { data }
=> 201
and again... and again... until you've added all the parts of the
transaction
GET /someresource
> => 200 { representation of transaction data }
PUT /someresource { what you just GOT back }
=> 200
and transaction ended.
> P.S. this is my first post to the list. Hello!
Hello!
--
Nic Ferrier
http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Hi Paul,
On 02.02.2007, at 06:16, Paul Winkler wrote:
> Hi,
>
> I've never built a REST app but am thinking of going that direction
> with the next major version of a content repository API used by my
> employer.
You might also want to check out JSR 170[1]
>
> APP is looking interesting, since most of what we need maps pretty
> well to member entries, feeds, and media resources. The introspection
> feature might be useful too.
Yes, APP is indeed very interesting towards that end. I tend to see
APP/Atom as providing for the machine-to-machine Web what HTML is for
the machine-to-human Web - it provides a set of semantics to get stuff done.
>
> But what about this use case? A third party comes along and licenses
> 10,000 documents to us. There are lots of cross-links among those
> documents, so a requirement in the contract is that we must load the
> entire set successfully, or none at all.
>
> Leaving this up to the client is not an option. This is part of a SOA,
> and there's currently four clients.
>
> APP doesn't seem to say anything about transactions, so I'm assuming
> that it's up to me to find a more generally REST-y solution.
Yes. The perfect thing to do this would be an APP extension.
>
> The one obvious idea that occurs to me is to define two new resources,
> a transaction manager and a transaction resource. A transaction is
> really just a collection. To create a new transaction, POST to the
> transaction manager with a list of the resource URIs you want to
> modify. It initializes the transaction by copying those members, and
> returns the URI of your new transaction.
>
> GET the transaction resource to see a list of member URIs, as usual.
> PUT or DELETE any of those members as usual. POST to the transaction
> to add new members as usual. To abort, just DELETE the transaction.
Hmm, IMHO that is close but not quite on the point. What you really
want is
to group together a bunch of requests; you want to tell the server
that the
10000 POSTs you are doing form a logical unit.
The general way to do this with HTTP (scan the list archives for this)
is to include a transaction identifier (a URI) in each request, via a
request HTTP header.
>
> But what does a "commit" look like? I don't see an appropriate verb
> other than PUT, so I see two options:
>
That is a missing piece, yes. You need a way to tell the transaction to
commit or roll back.
Here is my sketch how all this could be done with Atom:
- A server that provides transactionality declares in its service
document that
a) there is a transaction collection and b) what other collections
support transactions.
(You do this via James Snell's atompub-feature extension[2])
- POST to the transaction collection to create a new transaction; the
URI of the transaction
is the Location returned by the server in the 201 Created response.
- include that URI in all subsequent requests to the other collections
POST /documents
Transaction-URI: http://foo.org/transactions/3
[document1 of 10000]
Afterwards, PUT to the transaction a representation that sets it to the
'committed' state. This could, for example, be done with an APP control
extension, as in:
PUT /transactions/3
<entry>
<!-- .... -->
<app:control>
<tx:commit/>
</app:control>
</entry>
Note that all this applies only to local transactions; managing
distributed transactions would need to involve a transaction manager
and all the 2PC requirements. OTOH, in an environment that makes REST
a good choice, doing 2PC is usually the thing *not* to do - you'd need
other forms of coordination (this is next on my list of stuff to look
at, so I really cannot provide any guidance here now).
HTH,
Jan
[1]http://www.jcp.org/en/jsr/detail?id=170
[2]http://tools.ietf.org/id/draft-snell-atompub-feature-01.txt
> 1) PUT the URI of the original collection to the transaction. This is
> a nice and simple message, but means the transaction has to be a
> little smarter than a normal collection.
>
> 2) PUT the URIs of the transaction and the original collection to the
> transaction manager. This would be interpreted like "merge the
> contents of URI 1 into URI 2".
>
> (I first thought of POSTing a special message to the T.M. to commit,
> but realized that would look too much like creating a transaction. It
> smells RPC. I figure any time I can't tell what's going on by looking
> at a simple access log, it's not really REST!)
>
> Either way, we should check mod times and if a conflict is detected,
> return an error. 409?
>
> How's that sound? Better ideas?
>
>
> P.S. this is my first post to the list. Hello!
>
> --
>
> Paul Winkler
> http://www.slinkp.com
>
>
>
> Yahoo! Groups Links
>
>
>
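Jan's grouping-by-transaction-URI protocol can be sketched server-side as a buffer of pending requests that only becomes visible on commit. This is an illustrative sketch of the all-or-nothing semantics, not a real APP implementation; the class and method names are invented for the example:

```python
class TransactionStore:
    """Sketch of the transaction-collection idea from this thread:
    POST creates a transaction, requests carrying its URI are buffered,
    and a commit applies them atomically (all or nothing)."""

    def __init__(self):
        self.documents = {}      # committed, publicly visible state
        self.transactions = {}   # tx URI -> list of pending (uri, doc)
        self._next = 0

    def create_transaction(self):
        # POST to the transaction collection -> 201 Created + Location.
        self._next += 1
        tx = "/transactions/%d" % self._next
        self.transactions[tx] = []
        return tx

    def post_document(self, tx, uri, doc):
        # A POST carrying Transaction-URI is buffered, not written through.
        self.transactions[tx].append((uri, doc))

    def commit(self, tx):
        # PUT of the 'committed' state: apply every buffered request.
        for uri, doc in self.transactions.pop(tx):
            self.documents[uri] = doc

    def rollback(self, tx):
        # Abort: discard the buffer; nothing was ever visible.
        self.transactions.pop(tx)
```

Until `commit`, none of the 10,000 buffered documents are visible, which is exactly the contract requirement in Paul's use case.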
On 02.02.2007 at 06:23, S. Mike Dierken wrote: > The response is not meant to change randomly - you said "... want > me to > complete". Until you've indicated that your state is 'complete', > then the > response should remain unchanged (in order to take into > consideration the > failure to retrieve the initial response completely). Hmm, yes. That's probably due to my bad English. What I meant was that the trainer selects one of many exercises that the trainee is asked to take. An exercise may be as simple as a question with a multiple choice of answers. Where the random-ness (and thus, the dice) comes into play is where the trainer selects an exercise. Though this selection might involve further criteria at some time (e.g. criteria based on the trainee's learning process), in its simplest form it is just a pure random selection. When the trainee gives an incorrect answer, the chances of this exercise being re-selected might be increased. Or not. I'm not yet completely sure about this (I'm thinking about some kind of simple flashcard system [1]). But I'm tending toward a model where the order of the exercises is unpredictable / shuffles randomly. [1] http://en.wikipedia.org/wiki/Flashcard -- sven fuchs fon: +49 (58 45) 98 89 58 artweb design fax: +49 (58 45) 98 89 57 breite straße 65 www: http://www.artweb-design.de de-29468 bergen mail: svenfuchs@artweb-design.de
Thanks a lot, guys, for all the kind and quite illuminating responses! On 01.02.2007 at 17:59, Alan Dean wrote: > You could implement a dice throw as follows: > --> > GET /dice/throw > <-- > 302 Found > Location: /dice/4 I now have the feeling that I've asked for something pretty obvious. ;) Using curl I've seen that this even seems to be exactly the behaviour that Ruby on Rails shows by default when doing a redirect. So as I'm going to use Ruby on Rails that's probably the way for me to go. On 01.02.2007 at 17:59, Jon Hanna wrote: > The "dice" can indeed be mapped as a redirect. It's perfectly okay to > map it as a redirect from a GET - that entities returned from > resources > change over time is one of the inherent features of REST and this can > just as easily be between every single request as it can over more > clearly marked times of modification. What initially irritated me (and still does so) was the consideration that GET has to be idempotent. I had the impression that redirecting to different locations would mean issuing entirely different responses (and thus violating GET's idempotence). I suppose I've been wrong on this? But what's the meaning of idempotence then in this case? > Hence a 307 Temporary Redirect from the "dice" resource would be > apposite. Note that by default 307 responses are not cacheable (which > suits your purposes) though there is nothing to stop the target of the > redirect being cached (which is a good thing, if chance does mean a > repeat on the dice there's no reason why the cache shouldn't be used). Do you also agree with the 302 Found response that Alan proposed? Looking at the HTTP spec, am I right that there's no difference between these two responses as far as GET requests are concerned? > If you need to guard against cheating then a different model would be > required beyond a purely simple 307. No, cheating is not a design consideration right now. Thanks again for answering!
-- sven fuchs fon: +49 (58 45) 98 89 58 artweb design fax: +49 (58 45) 98 89 57 breite straße 65 www: http://www.artweb-design.de de-29468 bergen mail: svenfuchs@artweb-design.de
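The redirect-based dice discussed in this thread can be sketched as a minimal WSGI app. This is hypothetical illustration code (assuming a six-sided die and the /dice/throw and /dice/<n> URIs from the examples), not anything Rails or the posters actually wrote:

```python
import random

def dice_app(environ, start_response):
    """Minimal WSGI sketch of the redirect-based dice.

    GET /dice/throw answers 302 with a Location of /dice/<n>; GET on
    /dice/<n> returns the unchanging value of that throw, so the target
    of the redirect stays safely cacheable while the throw itself is not.
    """
    path = environ.get("PATH_INFO", "/")
    if path == "/dice/throw":
        n = random.randint(1, 6)  # the actual throw happens here
        start_response("302 Found", [("Location", "/dice/%d" % n)])
        return [b""]
    if path.startswith("/dice/") and path[len("/dice/"):].isdigit():
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [path[len("/dice/"):].encode("ascii")]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Note the design point from the thread: the randomness lives entirely in which Location the server hands out, while each `/dice/<n>` resource has a stable representation.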
Alan Dean wrote:
>
>
> On 2/1/07, Sven Fuchs <svenfuchs@...
> <mailto:svenfuchs%40artweb-design.de>> wrote:
> >
> > This question could probably be reduced to "How do I RESTfully throw
> > a dice?"
>
> You could implement a dice throw as follows:
>
> -->
> GET /dice/throw
>
> <--
> 302 Found
> Location: /dice/4
>
> {representation}
To throw a dice, use POST. To find out what the number was, use GET. To
cheat, use PUT.
cheers
Bill
Am 01.02.2007 um 19:56 schrieb Mark Baker: > How about; > GET /trainer > -> > 200 Ok > Content-Type: text/uri-list > http://example.org/activity/9 Cool :) I've not been aware of this content-type. I (think I) understand how this kind of response exactly models what I've been asking for. But as I'm going to implement this as a web application (using a browser as a client) I right now think that a 302 Found/Location redirect should do it. -- sven fuchs fon: +49 (58 45) 98 89 58 artweb design fax: +49 (58 45) 98 89 57 breite strae 65 www: http://www.artweb-design.de de-29468 bergen mail: svenfuchs@...
Ok, but you needn't use text/uri-list. text/html would work as well (with lots more content, of course). The point is simply that you can treat the die/dice as a random data source and therefore a vanilla GET/200 exchange should suffice. On 2/2/07, Sven Fuchs <svenfuchs@...> wrote: > Am 01.02.2007 um 19:56 schrieb Mark Baker: > > How about; > > GET /trainer > > -> > > 200 Ok > > Content-Type: text/uri-list > > http://example.org/activity/9 > > Cool :) > > I've not been aware of this content-type. I (think I) understand how > this kind of response exactly models what I've been asking for. > > But as I'm going to implement this as a web application (using a > browser as a client) I right now think that a 302 Found/Location > redirect should do it. > > -- > sven fuchs fon: +49 (58 45) 98 89 58 > artweb design fax: +49 (58 45) 98 89 57 > breite strae 65 www: http://www.artweb-design.de > de-29468 bergen mail: svenfuchs@... > > > > > > Yahoo! Groups Links > > > > -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
On 2 Feb 2007, at 11:36, Bill de hOra wrote:
> To throw a dice, use POST.
POST /dice/throw
->
HTTP/1.1 201 Created
Location: /dice/throw/90802
Content-type: text/plain
6
> To find out what the number was, use GET.
GET /dice/throw
-> //an Atom feed of recent throws
GET /dice/throw/90802
-> //the value of an individual throw, as text, HTML, XML,
whatever the client asks for
> To cheat, use PUT.
run away!
Paul
--
http://blog.whatfettle.com
Bill de hOra wrote: > To throw a dice, use POST. To find out what the number was, use GET. To > cheat, use PUT. For a user-throwable die, yes. For an automatic die where the user just sees the thrown result, GET the uncacheable result.
On Fri, Feb 02, 2007 at 09:59:31AM +0100, Jan Algermissen wrote: > Here is my sketch how all this could be done with Atom: > > - A server that provides transactionality declares in its service > document that > a) there is a transaction collection and b) what other collections > support transactions. > (You do this via James Snell's atompub-feature extension[2]) That looks interesting, thanks! > - POST to the transaction collection to create a new transaction; the > URI of the transaction > is the Location returned by the server in the 201 Created response. > > - include that URI in all subsequent requests to the other collections > > POST /documents > Transaction-URI: http://foo.org/transactions/3 Okay. I hadn't looked into the idea of custom headers. Somehow I assumed that wasn't kosher, since it means the client is assuming non-standard capabilities on the server. I am still lacking in rest-fu. > Afterwards, PUT to the transaction a representation that sets it to the > 'committed' state. This could for example be done with an APP control > extension, > as in: > > PUT /transactions/3 > > <entry> > <!-- .... --> > <app:control> > <tx:commit/> > </app:control> > </entry> Aha, I hadn't read section 12 of the protocol draft. > Note that all this applies only to local transactions ... Yeah, distributed transactions seem like YAGNI for this project. Thanks! -- Paul Winkler http://www.slinkp.com
On Fri, Feb 02, 2007 at 08:55:55AM +0000, Nic James Ferrier wrote:
> Paul Winkler <pw_lists@...> writes:
>
> > (I first thought of POSTing a special message to the T.M. to commit,
> > but realized that would look too much like creating a transaction. It
> > smells RPC. I figure any time I can't tell what's going on by looking
> > at a simple access log, it's not really REST!)
> >
> > Either way, we should check mod times and if a conflict is detected,
> > return an error. 409?
> >
> > How's that sound? Better ideas?
>
> We have discussed in the past:
Yeah, I'm starting to find my way in the archives, lots of stuff to
catch up on...
> POST /someresource
> => 201 /someresource/1
>
> POST /someresource/1 { data }
> => 201
>
> and again... and again... until you've added all the parts of the
> transaction
>
> GET /someresource
> => 200 { representation of transaction data }
Not sure what resource you meant here. Is that the right URI?
Also, if I understand you correctly, you're talking about transferring
the entire transaction state. That sounds fine for something small
like a typical online shopping cart, but for a content repository that
isn't going to scale. Think bulk loading. Potentially gigabytes of
data.
> PUT /someresource { what you just GOT back }
> => 200
>
> and transaction ended.
Ditto. This will only work for me if it's by reference, not by value.
Thanks for the input!
--
Paul Winkler
http://www.slinkp.com
Paul Winkler <pw_lists@...> writes:
>> GET /someresource
>> => 200 { representation of transaction data }
>
> Not sure what resource you meant here. Is that the right URI?
Yes. You get a representation of the entire transaction state.
> Also, if I understand you correctly, you're talking about transferring
> the entire transaction state. That sounds fine for something small
> like a typical online shopping cart, but for a content repository that
> isn't going to scale. Think bulk loading. Potentially gigabytes of
> data.
Sure. But we're talking about a representation of the transaction
state. It doesn't have to show you the whole data set: maybe just
lists of URIs?
>> PUT /someresource { what you just GOT back }
>> => 200
>>
>> and transaction ended.
>
> Ditto. This will only work for me if it's by reference, not by
> value.
Ditto.
--
Nic Ferrier
http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
On Fri, Feb 02, 2007 at 05:23:16PM +0000, Nic James Ferrier wrote: > Sure. But we're talking about a representation of the transaction > state. It doesn't have to show you the whole data set: maybe just > lists of URIs? Ah, I misread. Thanks. -- Paul Winkler http://www.slinkp.com
On Fri, Feb 02, 2007 at 09:59:31AM +0100, Jan Algermissen wrote: > You might also want to check out JSR 170[1] Thanks for the link, but AFAICT it's not really useful if neither the client nor the server is Java ;-) Nearly everything here is either Ruby or Python nowadays. -- Paul Winkler http://www.slinkp.com
On 02.02.2007, at 18:01, Paul Winkler wrote: >> >> POST /documents >> Transaction-URI: http://foo.org/transactions/3 > > Okay. I hadn't looked into the idea of custom headers. Somehow I > assumed that wasn't kosher, Hmm, IMHO it is ok, since client and server share the semantics provided in the APP extension, and if the extension defines a new HTTP header then they should both be aware of it. Others must ignore the header. Besides that, you could also include the Transaction URI in the Atom envelope (the extension could define an additional control element)...well, except for DELETE requests. > since it means the client is assuming > non-standard capabilities on the server. Yes, it does. But it does so only because the server declared that it (well, the collection) supports a specific feature. It is this feature that is the *reason* for the client to pick that collection and not another one. Think of it as late binding of the client component to the server component based on capabilities. With SOAP/WSDL, components late-bind based on the API; with HTTP/Atom they late-bind based on declared capabilities (or declared type if you want). Guess what is more flexible :-) (I'd also argue that the former involves the latter anyway and thus is just more work, unnecessary overhead, obsolete complexity....eventually wasted time and money). > I am still lacking in > rest-fu. What is 'rest-fu' ? Jan
Hi, regarding late-binding of components based on declared capabilities: It just occurred to me that this is a major difference between REST and messaging systems; in a messaging system, the connections between the components (data providers and data sinks) is static (configured) and cannot change dynamically. This is so, because it is not possible to say something about the sinks (to declare their capabilities) and providers would have no base to make a choice on. Just mumbling... Jan
> > You might also want to check out JSR 170[1] > >Thanks for the link, but AFAICT it's not really useful >if neither the client or the server is Java ;-) Be careful in discounting it just because of that. I've found reading through some JSRs to be very helpful even though I don't do Java. They can sometimes give you good design ideas. (I haven't looked at this particular one to know if it would in this situation or not...) Cheers, Harley Pebley www.skylark-software.com
Stikkit's new APIs look to be one of the more RESTful I've seen: http://stikkit.com/api Patrick Breitenbach PayPal
Not bad. The docs look shiny. Kind of odd that they would use PUT to 'toggle' the state of a resource. PUT is supposed to be idempotent and 'toggle' is the opposite of idempotent. Also, 'share/unshare' probably could be merged into 'sharing', where the content describes whether something is shared or not. I also think it's funny that they use a numerical bitmask for specifying multiple 'types'. The documentation doesn't describe the response headers much. Given that api_key can be omitted when using Authorization request header, the resource identifier becomes vague - for example "/stikkits?api_key=xyz" becomes "/stikkits". The response should indicate that the entity varies by the user, or the request could specify /whose/ stikkits are being talked about, and the authorization header would be used in part to determine whether the request should be allowed or not. On 2/2/07, pat_breitenbach <pat_breitenbach@...> wrote: > Stikkit's new APIs look to be one of the more RESTful I've seen: > http://stikkit.com/api > > Patrick Breitenbach > PayPal > > > > > Yahoo! Groups Links > > > >
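The point about 'toggle' versus 'sharing' comes down to idempotence, and can be seen in a few lines of illustrative Python (these are invented handlers, not Stikkit's actual API):

```python
def put_sharing(resource, shared):
    # Idempotent: repeating the identical request N times leaves the
    # resource in the same state as sending it once.
    resource["shared"] = shared

def put_toggle(resource):
    # Not idempotent: each repeat of the identical request flips the
    # state again, so a blind retry after a lost response is unsafe.
    resource["shared"] = not resource["shared"]
```

This is why merging share/unshare into a single state-setting 'sharing' resource is the safer PUT design: the request body names the target state instead of a transition.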
On Tue, 2007-01-30 at 12:29 +0000, Jon Hanna wrote: > PUT? ->true -> (same as previous) > | > |false > | > KNOWN TO HAVE EXISTED PREVIOUSLY -> true -> 410 Gone > | > |false > | > 404 File Not Found I think that text should read more like: "Known that it will never exist again" and point to "410 Gone, or 404". If it might come back, 404 is still the right code. 404 is allowed in either case: "The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent." - rfc2616 Benjamin
On Thu, 2007-02-01 at 15:02 +0100, Sven Fuchs wrote: > I imagine a central resource called "trainer" who's picking > exercises randomly. A request to the trainer would mean: "give me the > next exercise you want me to complete". The trainer responds with one > of many exercises most of the time. After having completed a certain > number of exercises the trainee would be directed to some kind of > summary or statistics. You have mentioned that in its simplest form this is throwing a die, however there seem to be more complex forms involved as well. The trainer may have to consider what has already been completed in order to make an appropriate decision. That means that the list of what has been completed has to be included explicitly or implicitly in the URL. Explicitly (client keeps track of which are completed): http://example.com/thetrainer?completed=1+2+3+4 Implicitly (server keeps track of which are completed): http://example.com/thetrainer?student=foo (add authentication details here?) In the client-side case you would likely need to be rewriting hrefs in pages you return to the student to include the list of completed items. In the server-side case you would likely need the client to PUT or POST information about completed courses, possibly in the form of completed exam papers for example. Benjamin.
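The 'explicit' client-side variant can be sketched as plain selection plus URL construction over the completed set. These helper names are hypothetical; only the `?completed=1+2+3+4` query shape comes from the example above:

```python
import random

def pick_exercise(all_exercises, completed):
    """Stateless 'trainer' sketch: the completed list arrives in the
    request URL, so the server keeps no per-student session state."""
    remaining = sorted(set(all_exercises) - set(completed))
    if not remaining:
        return None  # done: redirect the trainee to the summary resource
    return random.choice(remaining)

def trainer_url(completed):
    # Rewrite the trainer href to carry the (updated) completed list.
    return ("http://example.com/thetrainer?completed="
            + "+".join(str(c) for c in sorted(completed)))
```

The trade-off Benjamin describes is visible here: the server stays stateless, at the cost of rewriting every trainer link the client is given.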
On 2/3/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > > On Tue, 2007-01-30 at 12:29 +0000, Jon Hanna wrote: > > PUT? ->true -> (same as previous) > > | > > |false > > | > > KNOWN TO HAVE EXISTED PREVIOUSLY -> true -> 410 Gone > > | > > |false > > | > > 404 File Not Found > > I think that text should read more like: > "Known that it will never exist again" and point to "410 Gone, or 404". > If it might come back, 404 is still the right code. 404 is allowed in > either case: > > "The requested resource is no longer available at the server and no > forwarding address is known. This condition is expected to be > considered permanent." - rfc2616 OK, fair point. It does however raise the question: "What should happen if a PUT is made to a resource where it is known that it will never exist again?" Alan
On Fri, 2007-02-02 at 00:16 -0500, Paul Winkler wrote:
> I've never built a REST app but am thinking of going that direction
> with the next major version of a content repository API used by my
> employer. There's a few things I'm not sure how to model, the main
> one being transactions. I'm hoping for some guidance on my path to
> grokking REST.
...
> But what about this use case? A third party comes along and licenses
> 10,000 documents to us. There are lots of cross-links among those
> documents, so a requirement in the contract is that we must load the
> entire set successfully, or none at all.
There are two basic approaches here:
1. Model a transaction over multiple small edits
2. Make a resource available that allows the user to operate on the
entire state they want to modify as one request
The former has various implementations, but is painful to implement.
Part of that pain is because it butts up against the "stateless between
requests" constraint of REST. You transmit a bit of what you want to do,
then another bit in another request, and then finally request that all
of the bits you requested actually happen. This may be necessary if the
client has to consult current state during the transaction, however
thinking of a REST service in the same way you think about a database is
usually a mistake.
My suggestion is to use a resource that allows them to do everything
they want as one operation. Without a more detailed explanation of what
the customer might be doing it is hard to be specific, but let's look at
an example where they want to publish a thousand Atom articles:
>>>
POST http://example.com/newentries
<atom>
<entry>
entry 1
</entry>
<entry>
entry 2
</entry>
...
<entry>
entry 1000
</entry>
</atom>
<<<
200 OK
You can use a POE technique to ensure that this operation can be safely
retried if the client receives no response.
What about updating existing entries? Perhaps something like this would
work:
PUT http://example.com/select?entries=http://example.com/0001
+http://example.com/0002...+http://example.com/1000
<atom>
<entry xml:base="http://example.com/0001">
entry 1
</entry>
<entry xml:base="http://example.com/0002">
entry 2
</entry>
...
<entry xml:base="http://example.com/1000">
entry 1000
</entry>
</atom>
Whether attributes to identify each entry being updated are needed or
whether the order can be determined by the query is probably up for
debate, as is whether or not xml:base has strong enough semantics to
indicate which entry is which. This approach ultimately boils down to
finding a URL that represents all of the data you want to update,
preferably by filling out a server-provided template or form to locate
the data.
In my opinion, online services of all kinds (from REST to WS-*) will
tend to perform, scale and behave better when a single request is a
single request. Once the request is received, it is the responsibility of
the service that receives the request to ensure it is treated as an
atomic update.
Benjamin
On Sun, 2007-01-28 at 12:40 +1100, Mark Nottingham wrote:
> On 2007/01/27, at 4:32 PM, Benjamin Carlyle wrote:
> > On Sat, 2007-01-20 at 19:57 +0000, Bill de hOra wrote:
> >> Start here:
> >> http://www.mnot.net/drafts/draft-nottingham-http-poe-00.txt
> >> http://www.dehora.net/doc/httplr/draft-httplr-01.html
> > Problems with POE:
> > * The specification does not cover how the POE resource is created.
> That's intentional :)
> > Presumably, it is through a POST which could lead to a chicken and
> egg
> > situation.
> Sending a POST to get the form is one way, but not the only.
>
> Another would be to use GET, and assure that the response isn't
> cacheable; there are a number of ways to assure that the links you've
> given out don't collide without keeping a list of them, the easiest
> involving timestamps (along with some other information) or GUIDs.
...
Well, let's start with a base problem statement:
I have some state that I want to append to a resource. The right method
according to HTTP is POST, but if I don't get a response to my POST I
don't know whether or not to retry.
So here are the strategies I can think of so far:
1. Have the user observe some property of the system to determine
whether to retry themselves. In SCADA this might be to observe a change
in voltage before deciding whether or not to retry a circuit-breaker
trip. This can be automated as another SCADA concept: "Target state
monitoring". Regardless of the response we received, did the resource
actually reach the state we intended?
2. Select and execute. This is another SCADA idea. You first prime the
resource for operation. Only primed resources can have an operation
performed on them, and they automatically unprime when that operation
takes place.
3. Create a channel to operate through. The channel is designed to block
duplicate requests in a way that the client can be sure means their
request went through. Both HTTPLR and POE roughly follow this approach,
I think.
4. Always model the append as the creation of a new resource. Make the
creation of the resource (like the creation of the channel) a safe
operation that consumes a little server-side state but otherwise has no
operational effect. Once the resource has been created at least once,
PUT the data you would have POSTed as many times as necessary to be sure
it has gone through.
5. Do TCP-like sequence numbering at the message level
I think the channel concept and the new resource concepts have similar
characteristics. The channel concept would seem to perform better
whenever we don't want to create a new resource for the created state, and
may also handle channel teardown more gracefully. I think the new
resource approach is conceptually simpler, at least on the face of it.
Instead of a channel concept we just have the concept of the new
resource whose creation you requested.
POE tries to solve the problem of how to interact with the channel, and
leaves the channel creation process up to the implementation. Perhaps
this is a good thing. However you'll still need to deal with any new
resource you have created through a POST to the channel. Hrmmm... let's
write some pseudo-code:
Channel approach:
try
{
getChannel:
channel = RequestChannel(using POST or uncachable GET);
}
catch (NoResponse)
{
// old school pseudo-code :)
GOTO getChannel;
}
catch (...)
{
// handle failure - request not processed, can resubmit
}
try
{
sendPOST:
resource = channel.POST(my content);
// save resource url away for later use, if any
}
catch (NoResponse)
{
GOTO sendPOST;
}
catch (...)
{
// failed, may be able to tell whether or not request was
// processed if we only sent once
}
New Resource approach:
try
{
getResource:
resource = RequestResource(using POST or uncachable GET);
}
catch (NoResponse)
{
GOTO getResource;
}
catch (...)
{
// handle failure - data not submitted, can resubmit
}
try
{
sendPUT:
resource.PUT(my content);
}
catch (NoResponse)
{
GOTO sendPUT;
}
catch (...)
{
// failed, may be able to tell whether or not request was
// processed if we only sent once. Can resubmit.
}
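For concreteness, here is a runnable sketch of the new-resource approach, with the lossy network simulated in-process. All class and function names are invented for illustration; a real client would speak HTTP. The key property is that resource creation is harmless to repeat and the PUT is idempotent, so both steps can be retried freely:

```python
import uuid

class Server:
    def __init__(self):
        self.resources = {}  # url -> content (None until first PUT)

    def create_resource(self):
        # Safe: consumes a little server-side state, no operational effect.
        url = "/entries/" + uuid.uuid4().hex
        self.resources.setdefault(url, None)
        return url

    def put(self, url, content):
        if url not in self.resources:
            return 404       # resource reclaimed or never created
        self.resources[url] = content  # idempotent: same PUT twice = same state
        return 200

class LossyLink:
    """Delivers the request but drops the first `drop` responses."""
    def __init__(self, server, drop):
        self.server, self.drop = server, drop

    def put(self, url, content):
        status = self.server.put(url, content)
        if self.drop:
            self.drop -= 1
            raise TimeoutError
        return status

def reliable_append(server, link, content, attempts=5):
    url = server.create_resource()  # retrying this step would be harmless too
    for _ in range(attempts):
        try:
            link.put(url, content)
            return url
        except TimeoutError:
            continue                # PUT is idempotent, so just resend
    raise RuntimeError("server unreachable")

server = Server()
link = LossyLink(server, drop=2)
url = reliable_append(server, link, "entry 1")
print(server.resources[url])  # entry 1 - stored once, after two lost responses
```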
The main problems with both cases are where the channel or new resource
could time out before the client gives up on things. In a client-server
relationship the client is responsible for pushing the sequence through
to completion, and the server is just responsible for reasonably
fulfilling client requests.
If the server reclaims its channel or resource state leading to a failed
PUT, it isn't going to be possible to restart the request sequence
without risking duplicate submission again. The channel approach offers
an optimisation of allowing the client to DELETE the channel to free up
resources. This may mean that servers need to reap potentially-leaked
resources less aggressively.
The resource-based approach also introduces a problem by aliasing the
channel and created resource. If the created resource has a short
lifetime, it may be equivalent to an early timeout for client purposes.
Say the resource's PUT succeeds, but the response associated with this
is lost. When the next PUT comes in it gets a 404. Was the data
submitted or not? Should the request to submit (say) an atom entry be
processed, or should it be retried?
> I don't see how sending a POST to get the POE link leads to a
> chicken-
> and-egg problem, unless you also need the "get the form" operation
> (which really should be a GET) to be reliable as well. Even if you
> did, you can always bootstrap it with one that isn't required to be
> reliable.
I see your point. I think your approach is more similar to HTTPLR than I
originally surmised.
On Sat, 2007-01-27 at 15:28 +0000, Bill de hOra wrote:
> Benjamin Carlyle wrote:
> In light of this I prefer the
> > channel concept to simply be replaced by the concept of a created
> > resource.
> Maybe. Where you're worried about the reality of timeouts, I'm
> worried
> about the reality of HTTPLR acting as a gateway for MOMs.
So you would write your problem statement differently:
I have a message that I want to transmit to a likely unRESTful remote
system using a POST request. I don't want it to receive the message
twice.
> > * I think there is some danger that the message could be read to be
> a
> > whole HTTP request or SOAP request or other request that needs to be
> > delivered. That interpretation doesn't smell right to me, and I
> think
> > that any suggestion of message transfer should be explicitly avoided
> in
> > favour of state transfer.
> Again, maybe. Did you see any testable/operational consequences?
Only that it may lead to a more complex system that might muddy the
waters a little. I think the different problem statements may imply
different solution spaces... however I'm not completely sure of this
line of thought as yet.
Benjamin.
On 2/3/07, Benjamin Carlyle <benjamincarlyle@...> wrote:
> So here are the strategies I can think of so far:
> 1. Have the user observe some property of the system to determine
> whether to retry themselves. In SCADA this might be to observe a change
> in voltage before deciding whether or not to retry a circuit-breaker
> trip. This can be automated as another SCADA concept: "Target state
> monitoring". Regardless of the response we received, did the resource
> actually reach the state we intended?

I've found this approach useful in the past, but with a caveat; if you
look for a specific value then that would be an implementation
dependency.

A technique I've used once was to have the client send an HTTP header in
the POST request which played a role sort of like a client-side etag
with respect to the request body. The server, upon receiving the message
and updating the state of the resource, would return another header
containing a hash of the last day's worth of tags (which wasn't many) on
GET requests to that resource so that it could check if *its* update was
applied.

Interesting post...

Mark.
--
Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
Coactus; Web-inspired integration strategies http://www.coactus.com
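One way Mark's client-side-tag technique might look in code. The post doesn't specify the header names, the tag format, or the hash, so SHA-1 and the Message-Id-style tag below are assumptions; the point is only that a client can check whether *its* update was applied without depending on the resource's representation:

```python
import hashlib

class Resource:
    """Toy resource that remembers the tags of recently applied updates."""
    def __init__(self):
        self.state = []
        self.recent_tags = []

    def post(self, body, client_tag):
        self.state.append(body)
        self.recent_tags.append(client_tag)  # kept for ~a day in Mark's setup

    def get_applied_tags(self):
        # One hash per recent tag; a real response header might pack these
        # differently.
        return {hashlib.sha1(t.encode()).hexdigest() for t in self.recent_tags}

def was_my_update_applied(resource, client_tag):
    digest = hashlib.sha1(client_tag.encode()).hexdigest()
    return digest in resource.get_applied_tags()

r = Resource()
r.post("entry body", client_tag="msg-20070203-0001@client.example")
print(was_my_update_applied(r, "msg-20070203-0001@client.example"))  # True
print(was_my_update_applied(r, "msg-20070203-0002@client.example"))  # False
```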
On Sat, 2007-01-27 at 11:32 +0100, Jan Algermissen wrote:
> The model I am thinking about to achieve POE is based on the use of
> Atom and the inclusion of an ID in the POST body (or an HTTP header).
> POE aware clients would receive the ID to use from a factory, non-POE
> aware clients would just do the normal POST.

One of my REST design instincts is that whenever I see an ID used as
part of a message exchange pattern, I wonder why it isn't a URL. I think
ids in headers have a tendency to hide shared communication state
instead of making it explicit. In this case I would prefer that the ID
be the URL of the resource that the client needs to interact with next.
When the client makes its request to the factory this URL should be
returned in the Location header of the response.

> Another thought I had was that the client could probably create the
> ID itself (e.g. a tag: URI) and a new HTTP return code could indicate
> to the client that the ID wasn't suitable (together with a good one in
> the payload).

Mark Baker has just suggested the use of a client-generated id,
something like a client etag. The WS-* reliable messaging specification
also uses a client-generated ID, but this time it is a sequence number
a-la TCP so that missed messages and message ordering can also be dealt
with.

I think the WS-* approach is interesting. I sometimes find myself
wanting to be sure that requests are processed in the order in which
they are sent. For example in SENA[1] I get a timing signal from the
server to indicate something has changed. If these signals arrive too
fast I might still have a previous GET outstanding. Should I send the
new GET immediately, or wait until the existing one returns? If I send
immediately the response will come back sooner, and I will be able to
update the state of some soft-realtime data on my page. However under
extreme conditions it is possible that my GET requests to the same URL
could be processed out of order.
It might even be possible for the response I receive second to be older
than the response I receive first. If I update the user's screen with
the second I could be misleading them.

In practice, however, the use of sequence numbers can be pretty limiting
to scalability. The REST constraint of "no shared communication state
between requests" helps explain why. If I am up to sequence number 200
and the server fails over to an unrelated server, we won't be able to
continue the conversation. We will have to reinitiate it from scratch.
This already happens with HTTP/1.1 TCP/IP connections. It is usually
necessary to tear down and reestablish the TCP/IP connection on server
failover. It is probably impractical to try and ensure TCP-level
failover even within a close-knit cluster. It is only slightly more
practical to use a TCP-like technique at the coarser-grained message
level. Certainly when clusters operate between physical sites it can
become quite difficult.

This suggests that reliable messaging that guarantees ordering of
message processing is not achievable in the general case, though special
environments may support it. There will always be chances for loss of
communication state on failover. Even without failover it is an unfair
assumption that a server will process requests from a single client in
sequence. Proxies could reorder or alter the sequence, and simple
threading models that perform parallelisation of processing will break
the model. Ultimately, at-most-once delivery is probably as close to the
ideal as we can reach in general.

On Sat, 2007-02-03 at 23:24 -0500, Mark Baker wrote:
> A technique I've used once was to have the client send an HTTP header
> in the POST request which played a role sort of like a client-side
> etag with respect to the request body.
> The server, upon receiving the message and updating the state of the
> resource, would return another header containing a hash of the last
> day's worth of tags (which wasn't many) on GET requests to that
> resource so that it could check if *its* update was applied.

The SCADA approach is more direct, but then again it is working on
physical devices. You send a request for a transition. If all of the
computer and networking devices are fine you'll get the equivalent of an
OK back. However, all we know is that we kicked the servos into action.
It is possible that the mechanical device itself hasn't moved. The
device provides a read-back as to its actual state, and we configure a
timeout by which it must reach that state. No resends occur as this is
usually considered dangerous, however if the state doesn't match what we
requested soon enough it is reported as an error to the user for
correction.

The possible unreliability of mechanical devices (even of their
read-backs) has interesting effects on idempotency. We have what are
known as "trip to trip" transitions and "close to close". In other
words, everything we can see says that the device is in a particular
state. However the user has determined from other evidence (i.e. by
examining other resources) that we are being lied to. They send a
request to put the device into the same state that it is currently in.
We give the servos on the device another kick, and it usually fixes the
problem. Interestingly, though, we would normally not retry an
idempotent request like this automatically. Every time we kick those
servos it reduces the lifetime of the mechanical device. A big circuit
breaker might only have 50 trips in it before it needs maintenance. We
would not normally issue any request to a device like that unless the
user is directly at the helm requesting it.

Benjamin.
[1] http://soundadvice.id.au/blog/draft-carlyle-sena-01.txt
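Benjamin's point about sequence numbers and failover can be made concrete with a toy model. SequencedServer and its ack/nack vocabulary are invented here; WS-ReliableMessaging's actual protocol is more involved, but the failure mode is the same: the expected-sequence counter is shared communication state, and a fresh server doesn't have it:

```python
class SequencedServer:
    """Applies a request only if it carries the next expected sequence
    number; replays of already-applied numbers are acknowledged without
    being re-applied."""
    def __init__(self):
        self.expected = 1
        self.log = []

    def handle(self, seq, body):
        if seq < self.expected:
            return "ack"     # duplicate of something already applied
        if seq > self.expected:
            return "nack"    # gap: an earlier message was lost
        self.log.append(body)
        self.expected += 1
        return "ack"

primary = SequencedServer()
print(primary.handle(1, "a"))  # ack
print(primary.handle(2, "b"))  # ack
print(primary.handle(2, "b"))  # ack - replay absorbed, not re-applied
print(primary.log)             # ['a', 'b']

# Failover to an unrelated server: the shared sequence state is gone, so
# the conversation cannot continue where it left off.
replacement = SequencedServer()
print(replacement.handle(3, "c"))  # nack - client must restart from scratch
```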
On 04.02.2007, at 09:55, Benjamin Carlyle wrote:
> On Sat, 2007-01-27 at 11:32 +0100, Jan Algermissen wrote:
>> The model I am thinking about to achieve POE is based on the use of
>> Atom and the inclusion of an ID in the POST body (or an HTTP header).
>> POE aware clients would receive the ID to use from a factory, non-POE
>> aware clients would just do the normal POST.
>
> One of my REST design instincts is that whenever I see an ID used as
> part of a message exchange pattern, I wonder why it isn't a URL. I
> think ids in headers have a tendency to hide shared communication
> state instead of making it explicit. In this case I would prefer that
> the ID be the URL of the resource that the client needs to interact
> with next. When the client makes its request to the factory this URL
> should be returned in the Location header of the response.

Yes, exactly. Currently I am thinking about

POST /poe-collection-factory

201 Created
Location: /poe-collections/66525

Meaning the client requests the factory to create a POE collection (an
APP collection that supports POST-retry).

>> Another thought I had was that the client could probably create the
>> ID itself (e.g. a tag: URI) and a new HTTP return code could indicate
>> to the client that the ID wasn't suitable (together with a good one
>> in the payload).
>
> Mark Baker has just suggested the use of a client-generated id,
> something like a client etag. The WS-* reliable messaging
> specification also uses a client-generated ID, but this time it is a
> sequence number a-la TCP so that missed messages and message ordering
> can also be dealt with.

Yes, interesting. Have to think about it. Generally what I dislike about
all ideas so far is the server side state, even if it is cheap. It still
affects scalability, runtime substitution of server components etc.,
basically all the good stuff about REST we tend to promote :-)

> I think the WS-* approach is interesting.
> I sometimes find myself wanting to be sure that requests are processed
> in the order in which they are sent. For example in SENA[1] I get a
> timing signal from the server to indicate something has changed. If
> these signals arrive too fast I might still have a previous GET
> outstanding. Should I send the new GET immediately, or wait until the
> existing one returns?
>
> If I send immediately the response will come back sooner, and I will
> be able to update the state of some soft-realtime data on my page.
> However under extreme conditions it is possible that my GET requests
> to the same URL could be processed out of order. It might even be
> possible for the response I receive second to be older than the
> response I receive first. If I update the user's screen with the
> second I could be misleading them.
>
> In practice, however, the use of sequence numbers can be pretty
> limiting to scalability. The REST constraint of "no shared
> communication state between requests" helps explain why. If I am up to
> sequence number 200 and the server fails over to an unrelated server,
> we won't be able to continue the conversation. We will have to
> reinitiate it from scratch. This already happens with HTTP/1.1 TCP/IP
> connections. It is usually necessary to tear down and reestablish the
> TCP/IP connection on server failover. It is probably impractical to
> try and ensure TCP-level failover even within a close-knit cluster. It
> is only slightly more practical to use a TCP-like technique at the
> coarser-grained message level. Certainly when clusters operate between
> physical sites it can become quite difficult.

Only scanned these paragraphs quickly, but it appears there is some
communication state here that should be represented as a resource, or?

> This suggests that reliable messaging that guarantees ordering of
> message processing is not achievable in the general case, though
> special environments may support it.
> There will always be chances for loss of communication state on
> failover. Even without failover it is an unfair assumption that a
> server will process requests from a single client in sequence.

IMO, if you want sequence, you need an application that controls that
sequence (via hypermedia). Coordination between processes is on a
request-by-request basis in REST. If you want more complex coordination,
you need application state (managed by some of the coordinated parties).

> Proxies could reorder or alter the sequence, and simple threading
> models that perform parallelisation of processing will break the
> model. Ultimately, at most once delivery is probably as close to the
> ideal as we can reach in general.
>
> On Sat, 2007-02-03 at 23:24 -0500, Mark Baker wrote:
>> A technique I've used once was to have the client send an HTTP header
>> in the POST request which played a role sort of like a client-side
>> etag with respect to the request body. The server, upon receiving the
>> message and updating the state of the resource, would return another
>> header containing a hash of the last day's worth of tags (which
>> wasn't many) on GET requests to that resource so that it could check
>> if *its* update was applied.
>
> The SCADA approach is more direct, but then again it is working on
> physical devices. You send a request for a transition. If all of the
> computer and networking devices are fine you'll get the equivalent of
> an OK back. However, all we know is that we kicked the servos into
> action. It is possible that the mechanical device itself hasn't moved.
> The device provides a read-back as to its actual state, and we
> configure a timeout by which it must reach that state. No resends
> occur as this is usually considered dangerous, however if the state
> doesn't match what we requested soon enough it is reported as an error
> to the user for correction.
> The possible unreliability of mechanical devices (even of their
> read-backs) has interesting effects on idempotency. We have what are
> known as "trip to trip" transitions and "close to close". In other
> words, everything we can see says that the device is in a particular
> state. However the user has determined from other evidence (i.e. by
> examining other resources) that we are being lied to. They send a
> request to put the device into the same state that it is currently in.
> We give the servos on the device another kick, and it usually fixes
> the problem. Interestingly, though, we would normally not retry an
> idempotent request like this automatically. Every time we kick those
> servos it reduces the lifetime of the mechanical device. A big circuit
> breaker might only have 50 trips in it before it needs maintenance. We
> would not normally issue any request to a device like that unless the
> user is directly at the helm requesting it.

Unable to digest all this right now, but sounds very interesting.

Jan

> Benjamin.
> [1] http://soundadvice.id.au/blog/draft-carlyle-sena-01.txt
On 04.02.2007, at 05:24, Mark Baker wrote:
> On 2/3/07, Benjamin Carlyle <benjamincarlyle@...> wrote:
>> So here are the strategies I can think of so far:
>> 1. Have the user observe some property of the system to determine
>> whether to retry themselves. [...]
>
> [...]
> A technique I've used once was to have the client send an HTTP header
> in the POST request which played a role sort of like a client-side
> etag with respect to the request body. The server, upon receiving the
> message and updating the state of the resource, would return another
> header containing a hash of the last day's worth of tags (which wasn't
> many) on GET requests to that resource so that it could check if *its*
> update was applied.

Yes, I was thinking about taking the POST-and-GET-to-check route instead
of a special interaction pattern. In the case of APP this would be even
simpler, as all I'd need to be looking for as a client is the existence
of the POSTed entry in the collection (though I'll have to do some more
evaluation of how to actually check for identity[1]... ETag, Atom ID,
...).

One problem I see with this is latency. When could the client conclude
that its POST did not come through if it does not see the intended
effect? What if the POST response (the one it did not receive) was 202
as opposed to 201?

Jan

[1] Mark, how did you ensure Client-ETag uniqueness across clients?

> Interesting post...
>
> Mark.
> --
> Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
> Coactus; Web-inspired integration strategies http://www.coactus.com
> > "The requested resource is no longer available at the server and no
> > forwarding address is known. This condition is expected to be
> > considered permanent." - rfc2616
>
> OK, fair point. It does however raise the question: "What should
> happen if a PUT is made to a resource where it is known that it will
> never exist again?"

It's expected that it will not exist again, not guaranteed. Certainly if
it's deemed very likely that something will be PUT there again it should
be a 404 rather than a 410, but it's not erroneous for something to be
PUT to a URI currently resulting in 410.

The important consequence of this is that while receiving a 410 should
result in any recorded information about a URI held other than on the
server being deleted*, it should not be assumed that a reference to that
URI found elsewhere is outdated, as it could exist because of something
PUT there (or merely put there by other means) later. So we should
delete any such records, but not record them in any permanent "known to
not exist" record.

*Caches being the most obvious example, but 404s would also delete that
data in most cases (some specialised caches of information, rather than
pure web caches, may not) - a more interesting example would be a search
engine's records for both search results and future spidering. A 404
could be a temporary and erroneous condition so that URI should be
respidered later, but a 410 should drop the record from the respidering
list - incidentally google seems to do this.

--
Jon Hanna <http://www.hackcraft.net/>
"...if it walks like a duck, and quacks like a duck, it's probably not a
ConceptualWork about a duck." - Mark Baker
(resending this to the list, sorry for the dupe, Mark)

Am 02.02.2007 um 13:30 schrieb Mark Baker:
> Ok, but you needn't use text/uri-list. text/html would work as well
> (with lots more content, of course). The point is simply that you can
> treat the die/dice as a random data source and therefore a vanilla
> GET/200 exchange should suffice.

Yes :) But what's the advantage here? I'm probably not seeing it.

I understand that this has the advantage of decoupling things even
further than a 302 Found response does, in that it (the 200 Ok
text/uri-list) leaves the decision of calling the URI (or doing whatever
else with it) completely to the client. So, I could see how I would want
to use this approach in a (say) AJAX-based application. But, as I'm
planning to go with plain HTML, the usual browsers etc. ... how would I
use this?

> On 2/2/07, Sven Fuchs <svenfuchs@artweb-design.de> wrote:
>> Am 01.02.2007 um 19:56 schrieb Mark Baker:
>>> How about;
>>> GET /trainer
>>> ->
>>> 200 Ok
>>> Content-Type: text/uri-list
>>> http://example.org/activity/9
>>
>> Cool :)
>>
>> I've not been aware of this content-type. I (think I) understand how
>> this kind of response exactly models what I've been asking for.
>>
>> But as I'm going to implement this as a web application (using a
>> browser as a client) I right now think that a 302 Found/Location
>> redirect should do it.

--
sven fuchs fon: +49 (58 45) 98 89 58
artweb design fax: +49 (58 45) 98 89 57
breite straße 65 www: http://www.artweb-design.de
de-29468 bergen mail: svenfuchs@...
(resending this to the list, sorry for the dupe, Benjamin)

Am 03.02.2007 um 22:46 schrieb Benjamin Carlyle:
> You have mentioned that in its simplest form this is throwing a die,
> however there seem to be more complex forms involved as well. The
> trainer may have to consider what has already been completed in order
> to make an appropriate decision.

Yes. :) I've just tried to reduce the question to the minimum at this
point. But of course you're right. The more complex behaviour of the
trainer might even be pretty much predictable when one knows the exact
algorithm hidden here, but at the same time might appear to be
completely random to a user looking from the outside. As far as I
understand, REST deals with what is visible from the outside.

> That means that the list of what has been completed has to be included
> explicitly or implicitly in the url.
>
> Explicitly (client keeps track of which are completed):
> http://example.com/thetrainer?completed=1+2+3+4
>
> Implicitly (server keeps track of which are completed):
> http://example.com/thetrainer?student=foo
> (add authentication details here?)
>
> In the client-side case you would likely need to be rewriting hrefs in
> pages you return to the student to include the list of completed
> items. In the server-side case you would likely need the client to PUT
> or POST information about completed courses, possibly in the form of
> completed exam papers for example.

Yes, absolutely. I've started implementing this having the server keep
track of the state.

--
sven fuchs fon: +49 (58 45) 98 89 58
artweb design fax: +49 (58 45) 98 89 57
breite straße 65 www: http://www.artweb-design.de
de-29468 bergen mail: svenfuchs@...
On Sun, Feb 04, 2007 at 10:55:43AM +1000, Benjamin Carlyle wrote:
> My suggestion is to use a resource that allows them to do everything
> they want as one operation. Without a more detailed explanation of
> what the customer might be doing it is hard to be specific, but let's
> look at an example where they want to publish a thousand atom
> articles:
(snip)

That's fine and dandy, but I'm more concerned about the example where
they want to publish ten thousand atom media entities.

(snip)
> You can use a POE technique to ensure that this operation can be
> safely retried if the client receives no response.

What is POE?

> In my opinion, online services of all kinds (from REST to WS-*) will
> tend to perform, scale and behave better when a single request is a
> single request. Once the request is received, it is the responsibility
> of the service that receives the request to ensure it is treated as an
> atomic update.

Noted. Many of our requirements fit perfectly into the single-request
model. It's just the bulk loading stuff that's a pain. Maybe I should
provide a separate interface for that.

--
Paul Winkler
http://www.slinkp.com
Paul Winkler <pw_lists@...> writes:
> Noted. Many of our requirements fit perfectly into the single-request
> model. It's just the bulk loading stuff that's a pain.
> Maybe I should provide a separate interface for that.

Is it though? Surely you could publish a format requirement for a POSTed
entity that allowed bulk uploading in one shot.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
for all your tapsell ferrier needs
On Mon, Feb 05, 2007 at 06:54:20PM +0000, Nic James Ferrier wrote:
> Paul Winkler <pw_lists@...> writes:
>
> > Noted. Many of our requirements fit perfectly into the
> > single-request model. It's just the bulk loading stuff that's a
> > pain. Maybe I should provide a separate interface for that.
>
> Is it though? Surely you could publish a format requirement for a
> POSTed entity that allowed bulk uploading in one shot.

Sure, but that's impractical when I have gigabytes of stuff to load in
one "transaction", as mentioned previously in the thread.

--
Paul Winkler
http://www.slinkp.com
Paul Winkler <pw_lists@...> writes:
> On Mon, Feb 05, 2007 at 06:54:20PM +0000, Nic James Ferrier wrote:
>> Paul Winkler <pw_lists@...> writes:
>>
>> > Noted. Many of our requirements fit perfectly into the
>> > single-request model. It's just the bulk loading stuff that's a
>> > pain. Maybe I should provide a separate interface for that.
>>
>> Is it though? Surely you could publish a format requirement for a
>> POSTed entity that allowed bulk uploading in one shot.
>
> Sure, but that's impractical when I have gigabytes of stuff to load in
> one "transaction", as mentioned previously in the thread.

Ok. But if you can't load it in a single request then....

--
Nic Ferrier
http://www.tapsellferrier.co.uk
for all your tapsell ferrier needs
On 2/4/07, Sven Fuchs <svenfuchs@...> wrote:
> Am 02.02.2007 um 13:30 schrieb Mark Baker:
> > Ok, but you needn't use text/uri-list. text/html would work as well
> > (with lots more content, of course). The point is simply that you
> > can treat the die/dice as a random data source and therefore a
> > vanilla GET/200 exchange should suffice.
>
> Yes :)
>
> But what's the advantage here? I'm probably not seeing it.
>
> I understand that this has the advantage of decoupling things even
> further than a 302 Found response does in that it (the 200 Ok
> text/uri-list) leaves the decision of calling the URI (or doing
> whatever else with it) completely to the client.

It's more that the client doesn't request the roll as it does with the
other approaches; it simply makes observations of the state of the dice
on the server.

> So, I could see how I would want to use this approach in a (say)
> AJAX-based application. But, as I'm planning to go with plain HTML,
> the usual browsers etc. ... how would I use this?

Like this, only with more HTML, and (as I understand your needs), the
random value would be turned into a link;

http://www.random.org/cgi-bin/randnum?num=1&min=1&max=6

Mark.
On 2/4/07, Jan Algermissen <algermissen1971@...> wrote:
> On 04.02.2007, at 05:24, Mark Baker wrote:
> > On 2/3/07, Benjamin Carlyle <benjamincarlyle@...> wrote:
> >> So here are the strategies I can think of so far:
> >> 1. Have the user observe some property of the system to determine
> >> whether to retry themselves. [...]
> >
> > [...]
> > A technique I've used once was to have the client send an HTTP
> > header in the POST request which played a role sort of like a
> > client-side etag with respect to the request body. The server, upon
> > receiving the message and updating the state of the resource, would
> > return another header containing a hash of the last day's worth of
> > tags (which wasn't many) on GET requests to that resource so that
> > it could check if *its* update was applied.
>
> Yes, I was thinking about taking the POST-and-GET-to-check route
> instead of a special interaction pattern. In the case of APP this
> would be even simpler, as all I'd need to be looking for as a client
> is the existence of the POSTed entry in the collection (though I'll
> have to do some more evaluation of how to actually check for
> identity[1]... ETag, Atom ID, ...)

But that would be an implementation dependency, so your client wouldn't
be reusable with, say, non-APP resources. My approach is generic.

> One problem I see with this is latency. When could the client conclude
> that its POST did not come through if it does not see the intended
> effect? What if the POST response (the one it did not receive) was 202
> as opposed to 201?

Then it would have to keep checking, or else a subscription would have
to be made.

> Jan
>
> [1] Mark, how did you ensure Client-ETag uniqueness across clients?

IIRC, it was something like a MIME Message-Id.

Mark.
--
Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
Coactus; Web-inspired integration strategies http://www.coactus.com
On Mon, 2007-02-05 at 10:25 -0500, Paul Winkler wrote: > On Sun, Feb 04, 2007 at 10:55:43AM +1000, Benjamin Carlyle wrote: > > You can use a POE technique to ensure that this operation can be > safely > > retried if the client receives no response. > What is POE? Sorry: Post Once Exactly. Techniques for this are currently being discussed in the POST at most once thread. > On Mon, 2007-02-05 at 14:05 -0500, Paul Winkler wrote: > On Mon, Feb 05, 2007 at 06:54:20PM +0000, Nic James Ferrier wrote: > > Paul Winkler <pw_lists@...> writes: > > > Noted. Many of our requirements fit perfectly into the > single-request > > > model. It's just the bulk loading stuff that's a pain. > > > Maybe I should provide a separate interface for that. > > Is it though? Surely you could publish a format requirement for a > > POSTed entity that allowed bulk uploading in one shot. > Sure, but that's impractical when I have gigabytes of stuff to load in > one "transaction", as mentioned previously in the thread. That would seem to be a problem with implementation technology, however I concede that such limitations need to be worked around :) So let's look at this in stages: 1. Transfer state of new publications to server 2. Server acts on the publication as an atomic unit 3. Client is notified of transaction completion The simplest implementation of POST -> 200 OK is not available because the implementation technology would cause us to eat up all available memory just transferring the request. So we use 1a. Use any available file transfer mechanism from client to server 1b. Request the server use the transferred file as input 2. Server acts on publication as an atomic unit 3. Server replies with 200 OK, or sends a request back to the client saying effectively 200 OK. This leaves us with a number of problems still to solve. For example, how do we phrase the "load this data" request. It is most analogous to the WebDav COPY request.
ie, copy the file resource we just gave you to the set of published nodes. You'll find varying opinions on the merit of COPY on this list. IIRC Fielding has noted in the past that HTTP is designed to operate on a single url, and that WebDav COPY and related operations have both theoretical and practical problems attached to them. The other alternative is to head in the direction you are already heading, which is unlikely to be worse than the file transfer + COPY approach. Perform the transaction piecemeal with POSTs to a transaction url or other requests that refer to the transaction url in a header, then POST a commit marker into the transaction resource. If the client really needs to make the update atomically and your implementation technology really doesn't permit the update to be performed as a single request, you have to head in this sort of direction. Just plan your error recovery carefully. There are a lot more ways that multiple requests can go wrong. Benjamin.
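The piecemeal-transaction alternative described above can be sketched in-memory. All names here are hypothetical: the point is only the shape of the interaction, where pieces POSTed to a transaction resource take effect atomically when the commit marker is POSTed.

```python
import uuid

# Hypothetical sketch: POST pieces to a transaction url, then POST a commit
# marker; only on commit does the server apply the accumulated pieces.
class Server:
    def __init__(self):
        self.transactions = {}   # transaction url -> pending pieces
        self.published = []      # the collection the transaction updates

    def create_transaction(self):          # POST /transactions -> Location
        url = "/transactions/" + uuid.uuid4().hex
        self.transactions[url] = []
        return url

    def post_piece(self, txn_url, piece):  # POST each piece to the txn url
        self.transactions[txn_url].append(piece)

    def commit(self, txn_url):             # POST the commit marker
        self.published.extend(self.transactions.pop(txn_url))

srv = Server()
txn = srv.create_transaction()
srv.post_piece(txn, "node-1")
srv.post_piece(txn, "node-2")
visible_before_commit = list(srv.published)   # nothing visible yet
srv.commit(txn)
```

The error-recovery caveat applies to every step here: each POST of a piece, and the commit itself, can independently fail or time out.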
On Wed, Feb 07, 2007 at 07:24:18AM +1000, Benjamin Carlyle wrote: > .... Just plan your error recovery carefully. There are a lot more > ways that multiple requests can go wrong. Indeed! Thanks very much for the tips. -- Paul Winkler http://www.slinkp.com
Chris Dent wrote: > I wonder if it would also be useful to > implement .<filetype extension>? Just curious what specifically you were thinking... -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
Bill Venners wrote: > Perhaps I misunderstood you. I agree with your aesthetic > sense that paths are prettier than queries in URIs. But I > think that both paths and queries are needed, so sometimes you > will have query parts. The question I was asking is which > form of embedding query params in URIs might be the most > pretty and user friendly? > > http://www.artima.com/articles?o=a&t=java&p=7 > > Is the traditional way. But: > > http://www.artima.com/articles;a,tjava,p7 > > or > > http://www.artima.com/articles~a,tjava,p7 > > Could also be used in our architecture. I'm not sure that > they are much prettier than the traditional query form, but > the latter forms are shorter. Can you give some use cases where queries are *needed* (beyond one query parameter?) I'm not disagreeing, just wanting to see your use cases. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
"Mike Schinkel" <mikeschinkel@...> writes: > Can you give some use cases where queries are *needed* (beyond one query > parameter?) For unstructured search (like google's basic search), you pretty much gives only one kind of input. For structured search (like google's advance search), you need to 'type' each input. For example, if I supply "2006-10-20", I want to type it as the modification date, instead of a string somewhere in the body of a page. YS.
On Sat, 2007-02-03 at 23:24 -0500, Mark Baker wrote:
> On 2/3/07, Benjamin Carlyle <benjamincarlyle@...> wrote:
> > So here are the strategies I can think of seeing so far:
> > 1. Have the user observe some property of the system to determine
> > whether to retry themselves. In SCADA this might be to observe a change
> > in voltage before deciding whether or not to retry a circuit-breaker
> > trip. This can be automated as another SCADA concept: "Target state
> > monitoring". Regardless of the reponse we recieved, did the resource
> > actually reach the state we intended?
> A technique I've used once was to have the client send an HTTP header
> in the POST request which played a role sort of like a client-side
> etag with respect to the request body. The server, upon receiving the
> message and updating the state of the resource, would return another
> header containing a hash of the last day's worth of tags (which wasn't
> many) on GET requests to that resource so that it could check if *its*
> update was applied.
I'm starting to like this approach. Let me have a go at rephrasing it as
a concrete proposal:
Problem statement: (same as before)
I have some state that I want to append to a resource. The right method
according to HTTP is POST, but if I don't get a response to my POST I
don't know whether or not to retry.
Client algorithm:
...
guid = generateGloballyUniqueID();
request.addHeader("Client-Etag", guid);
try
{
retryPOST:
    startOrResetTimer(reasonable digest retention period, eg 2min);
    factory.POST(request);
}
catch (NoResponse) // aka GatewayTimeout
{
    etagDigest = factory.GET();
    if (guid in etagDigest)
    {
        // Nothing to be done. The POST was successful.
    }
    else
    {
        // One of two possibilities exists. Either
        // * our POST didn't arrive, or
        // * our etag has cycled out of the digest.
        // We try to ensure that the latter doesn't
        // happen by giving up after a reasonable
        // period.
        goto retryPOST;
    }
}
catch (RetentionPeriodTimeout)
{
    // It is still possible that our etag would be in
    // the digest at this point, so we could do a final
    // GET. If we are in the digest, there is no problem.
    // If we are not in the digest we can no longer assume
    // that it is because our request didn't happen.
    // Our request might have simply cycled out.
}
catch (...)
{
    // Normal error handling
}
Server constraints:
* Client etags are stored in the factory as a digest of recent POST
requests for a reasonable amount of time
* Only successful requests have their etag stored in the digest, so
clients can still retry failed requests. Success would generally mean
that state was successfully appended to the server, though there may be
some corner cases.
Possible efficiency improvements:
* A URI template might allow the client to query for their specific
etag, but a protocol would have to be developed for this. Perhaps
instead of a digest, the factory could return this template. That would
also potentially deal with security issues arising from guids leaking
from one client to another.
Pros/Cons:
* In the normal case where the POST does not time out there is very
little extra communications overhead
* The server has to store the state of recent successful requests for a
period rather than the state of requests that did not go ahead. ie we
trade less communications overhead for more server state overhead. On
the other hand, this server state overhead should be proportional to the
amount of state the server allowed to be appended to itself as part of
the POST. It doesn't change the fundamental server-side state picture...
just changes the constant.
Cautions
* Under extreme conditions there could still be a race condition between
a POST arriving at the server and a GET request being issued to the
factory or template-derived url. This shouldn't really happen if the
client gives up on the POST under reasonable conditions. Those might
include "40s has passed", or "I'm using TCP/IP keepalive while requests
are outstanding to monitor our shared communication state, and the HA
cluster member I was talking to appears to have been replaced by its
backup, killing my connection". The final case of "I'm using TCP/IP
keepalive while requests are outstanding, and it simply timed out due to
network conditions" could still be a problem.
Benjamin
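The proposal above can be made concrete as a small runnable sketch (names like `Factory` and `post_once_exactly` are mine, and a raised `TimeoutError` stands in for the lost-response / GatewayTimeout case). The factory remembers the Client-Etag of each successful POST in a digest; a client that got no response GETs the digest to learn whether its POST was applied before retrying.

```python
import uuid

class Factory:
    """Toy server: applies each POST and remembers recent Client-Etags."""
    def __init__(self):
        self.entries = []
        self.digest = set()   # Client-Etags of recent successful POSTs

    def post(self, body, client_etag, lose_response=False):
        self.entries.append(body)              # the POST is applied...
        self.digest.add(client_etag)
        if lose_response:
            raise TimeoutError("no response")  # ...but the response is lost
        return 201

    def get_digest(self):
        return set(self.digest)

def post_once_exactly(factory, body, **kw):
    guid = uuid.uuid4().hex                    # the Client-Etag
    try:
        return factory.post(body, guid, **kw)
    except TimeoutError:
        if guid in factory.get_digest():
            return 201                         # applied; nothing to be done
        return post_once_exactly(factory, body)  # didn't arrive; retry

f = Factory()
status = post_once_exactly(f, "entry-1", lose_response=True)
```

The sketch omits the retention-period timer: in a real client the digest check would only be trusted within the window the server retains tags, as the Cautions note.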
Yohanes Santoso wrote: > > Can you give some use cases where queries are *needed* (beyond one > > query > > parameter?) > > For unstructured search (like google's basic search), you > pretty much give only one kind of input. > > For structured search (like google's advanced search), you > need to 'type' each input. For example, if I supply > "2006-10-20", I want to type it as the modification date, > instead of a string somewhere in the body of a page. Ah. BTW, and this is a philosophy point, encoding into a single string for a search query IMO is probably the better way to go because then it is resilient to change. Splitting out into different parameters means the interface might need to change when you add new features. This is essentially the same argument for why the URL as a single string is better than an EPR[1]. Also, could you not encode the date into the URL path? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..." [1] http://lists.w3.org/Archives/Public/www-tag/2006Dec/att-0038/EnterpriseWSTag.html
I'm building an internal system here at work and while I've been able
for the most part to model a RESTful interface for it, there's one
thing that I haven't been able to model quite successfully: batch
updates of resources.
I wouldn't be revealing too much to say that the system acts as a
single interface around the repository manipulation systems of the
various domain registrars we use, one of which is the IEDR, the Irish
Domain Registry. The IEDR, as with many other domain registries, use
EPP[1] or a variant thereof to allow resellers to manipulate their
repositories. The IEDR's own implementation extends EPP to implement
their MSD process. The MSD process allows resellers to suspend a
domain as a credit control measure for about 30 days. If a domain
which is in the MSD process is not taken out of it and unsuspended by
that time, the domain becomes available for registration again.
The problem here is that on any given day, a large number of domains
can be moved in and out of the MSD process, and this is generally done
in batches.
Currently, the cleanest way I can think of is to POST a collection of
domain representations to a special 'batch' endpoint, not entirely
unlike the way that GData works. But that doesn't feel quite right,
almost as if I'm abusing POST.
Has anybody else ever encountered a similar problem to this, and if
so, how did you go about solving it?
K.
[1] http://en.wikipedia.org/wiki/Extensible_Provisioning_Protocol
Take a read of the RFCs: it's like staring into hell.
Keith Gaughan wrote: > Currently, the cleanest way I can think of is to POST a collection of > domain representations to a special 'batch' endpoint, not entirely > unlike the way that GData works. But that doesn't feel quite right, > almost as if I'm abusing POST. Collection is a type of resource once you take an "everything is a resource" view. Representation of a collection is therefore a valid thing to POST. As such it doesn't seem like abusing POST to POST a representation of a collection to a resource that handles such collections. Quite the obvious approach in fact. Now, if it makes good conceptual sense to identify that batch with its own batch identifier (by which I mean, it makes sense within your model to do so, particularly if you might want to retrieve information about that batch at a later date) AND if it makes good sense for the client to determine the identifier, then you've got something where PUT also makes sense and is probably actually better. Otherwise I'd POST.
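The "collection is a resource" view above can be sketched in code (everything here is hypothetical, loosely modelled on the MSD suspend/unsuspend batches): the batch of operations is itself a representation, POSTed to a batch-handling resource, or PUT to a client-chosen batch URI when the client names the batch.

```python
class Registry:
    """Toy registrar: a batch of (action, domain) pairs is itself a resource."""
    def __init__(self):
        self.batches = {}
        self.suspended = set()

    def _apply(self, ops):
        for action, domain in ops:
            if action == "suspend":
                self.suspended.add(domain)
            elif action == "unsuspend":
                self.suspended.discard(domain)

    def post_batch(self, ops):      # POST a collection representation
        url = "/batches/%d" % (len(self.batches) + 1)
        self.batches[url] = ops
        self._apply(ops)
        return 201, url             # 201 Created plus a Location

    def put_batch(self, url, ops):  # PUT when the client names the batch
        created = url not in self.batches
        self.batches[url] = ops
        self._apply(ops)
        return (201 if created else 200), url

reg = Registry()
status, loc = reg.post_batch([("suspend", "example.ie"), ("suspend", "foo.ie")])
put_status, _ = reg.put_batch("/batches/msd-20070208",
                              [("unsuspend", "example.ie")])
```

Keeping the batch addressable at its own URI also makes it possible to GET information about the batch later, which is the case where PUT with a client-chosen identifier pays off.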
On Thu, 2007-02-08 at 17:17 +0000, Keith Gaughan wrote: > The problem here is that on any given day, a large number of domains > can be moved in and out of the MSD process, and this is generally done > in batches. > Currently, the cleanest way I can think of is to POST a collection of > domain representations to a special 'batch' endpoint, not entirely > unlike the way that GData works. But that doesn't feel quite right, > almost as if I'm abusing POST. My simple rule of thumb is this: * If you are replacing information, you are PUTting * If you are adding information, you are POSTing * If you are removing information, you are DELETE-ing * If you are retrieving information, you are GETting So now the question becomes: What is the url of the information you are replacing, adding, removing, or retrieving? For batch jobs determining this url can be hard. If you are finding it hard, perhaps REST is not the right tool for the job here... or perhaps you have couched the problem in terms that made sense for a non-REST system that now need to be rethought. For example, why are you doing these things in batches? Could your clients pipeline requests instead and transfer the same requests in the same amount of time as a batch request? Could they simply retry requests they didn't get a response on, instead of unnecessarily treating the batch like a transaction? Benjamin.
On 1/27/07, Bill de hOra <bill@...> wrote: > The problem in my mind is GETting a cached ID and sharing it with someone else. > If you want to serve IDs have clients use POST. Correct. If you're trying to solve this problem in a browser where only GET and POST are allowed then you can use that POST to create a new ID that is used in a subsequent POST. If you are outside HTML forms then since you're POSTing you might as well create a new resource and return that via 201 and the Location: header. The client can then PUT a representation to that newly created resource, and since PUT is idempotent it can keep trying until it succeeds. -joe -- Joe Gregorio http://bitworking.org
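The POST-then-PUT pattern above can be sketched in-memory (hypothetical names; `drop_responses` simulates PUT responses that get lost in transit): POST mints the new resource once, and the client then PUTs the representation to it, retrying freely because PUT is idempotent.

```python
class Server:
    """Toy server: POST mints a resource, PUT stores a representation."""
    def __init__(self, drop_responses=0):
        self.resources = {}
        self.next_id = 0
        self.drop = drop_responses   # how many PUT responses to "lose"

    def post_create(self):           # POST -> 201 Created + Location
        self.next_id += 1
        url = "/items/%d" % self.next_id
        self.resources[url] = None
        return 201, url

    def put(self, url, body):
        self.resources[url] = body   # applied even when the response is lost
        if self.drop > 0:
            self.drop -= 1
            raise TimeoutError("response lost")
        return 200

def reliable_store(server, body):
    _, url = server.post_create()    # the one non-idempotent step
    while True:
        try:
            server.put(url, body)    # idempotent: safe to retry blindly
            return url
        except TimeoutError:
            continue

s = Server(drop_responses=2)
url = reliable_store(s, "hello world")
```

Note the remaining gap the thread is circling: the initial POST is still not safely retryable, which is why it should carry as little state as possible (just minting the URI).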
On 2/7/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > I'm starting to like this approach. Let me have a go at rephrasing it as > a concrete proposal: > > Problem statement: (same as before) > I have some state that I want to append to a resource. The right method > according to HTTP is POST, but if I don't get a response to my POST I > don't know whether or not to retry. I'm not sure what you mean by "append", but this approach is useful for both POST(a) as well as POST(p). > > Client algorithm: > ... > guid = generateGloballyUniqueID(); That would be more of a message id, which is ok if you just want verifiable once-and-only-once. But you also have the option of making the tag value a function of the request representation, which would be useful in cases where the same representation might be sent in multiple messages. Your analysis looks pretty good. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
It had to happen sooner or later. The big boys are waking up and discovering REST, and naturally they want to protect all us little developers from worrying our pretty little heads about nasty things like HTTP and XML by creating easy-to-use REST frameworks: JSR-311 Java API for RESTful Web Services http://jcp.org/en/jsr/detail?id=311 Remember, these are the same jokers who gave us servlets and the URLConnection class as well as gems like JAX-RPC and JAX-WS. They still seem to believe that these are actually good specs, and they are proposing to tunnel REST services through JAX-WS (Java API for XML Web Services) endpoints. They also seem to believe that "building RESTful Web services using the Java Platform is significantly more complex than building SOAP-based services". I don't know that this is false, but if it's true it's only because Sun's HTTP APIs were designed by architecture astronauts who didn't actually understand HTTP. This proposal does not seem to be addressing the need for a decent HTTP API on either the client or server side that actually follows RESTful principles instead of fighting against them. To give you an idea of the background we're dealing with here, one of the two people who wrote the proposal "represents Sun on the W3C XML Protocol and W3C WS-Addressing working groups where he is co-editor of the SOAP 1.2 and WS-Addressing 1.0 specifications. Marc was co-specification lead for JAX-WS 2.0 (the Java API for Web Services) developed at the JCP and has also served as Sun's technical lead and alternate board member at the Web Services Interoperability Organization (WS-I)." The other submitter seems to be a primary instigator of the Fast Infoset effort to hide XML in binary goop. This is like asking Karl Rove and Dick Cheney to write the Democratic Party platform. Do we really want to trust these folks to define the official Java spec for REST? Please read the JSR, and send comments to jsr-311-comments@...
I hope we can derail this completely, but we probably can't. If not, are there any JSR members here who might join the working group and bring some sanity and actual REST experience to the development of the eventual specification? If we can't stop it, maybe we can at least limit the damage. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
I was about to write something similar... In the first place I don't understand why this needs to be a JSR. I understand JSR as a means to standardize something existent into the Java platform. But there is no ecosystem of competing software-that-lends-to-the-REST-architectural-style-TM in Java in the first place. These guys should open a java.net project, gain some popularity and come back in two years. Alas, real damage will happen when someone tries to include this into J2SE. JSRs as such do not necessarily mean too much. Jerome Louvel is slated to go on the EG and - to my recollection - is the only one who's building a REST-oriented framework in Java and who I've seen active in this community. Jerome, maybe you can comment. Will send similar comments to that address. Thanks for bringing this up Matthias -- matthias.ernst@... software architect +49.40.32 55 87.503 > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Elliotte Harold > Sent: Wednesday, February 14, 2007 3:56 PM > To: REST Discuss > Subject: [rest-discuss] Sun proposes to apply Web service > standardization principles to REST > > It had to happen sooner or later. The big boys are waking up and > discovering REST, and naturally they want to protect all us little > developers from worrying our pretty little heads about nasty > things like > HTTP and XML by creating easy-to-use REST frameworks: > > JSR-311 Java API for RESTful Web Services > http://jcp.org/en/jsr/detail?id=311 > > Remember, these are the same jokers who gave us servlets and the > URLConnection class as well as gems like JAX-RPC and JAX-WS. > They still > seem to believe that these are actually good specs, and they are > proposing to tunnel REST services through JAX-WS (Java API > for XML Web > Services) endpoints. 
> > They also seem to believe that "building RESTful Web services > using the > Java Platform is significantly more complex than building SOAP-based > services". I don't know that this is false, but if it's true > it's only > because Sun's HTTP API were designed by architecture astronauts who > didn't actually understand HTTP. This proposal does not seem to be > addressing the need for a decent HTTP API on either the > client or server > side that actually follows RESTful principles instead of fighting > against them. > > To give you an idea of the background we're dealing with here, one of > the two people who wrote the proposal "represents Sun on the W3C XML > Protocol and W3C WS-Addressing working groups where he is > co-editor of > the SOAP 1.2 and WS-Addressing 1.0 specifications. Marc was > co-specification lead for JAX-WS 2.0 (the Java API for Web Services) > developed at the JCP and has also served as Sun's technical lead and > alternate board member at the Web Services Interoperability > Organization > (WS-I)." > > The other submitter seems to be a primary instigator of the > Fast Infoset > effort to hide XML in binary goop. > > This is like asking Karl Rove and Dick Cheney to write the Democratic > Party platform. > > Do we really want to trust these folks to define the official > Java spec > for REST? Please read the JSR, and send comments to > jsr-311-comments@... > > I hope we can derail this completely, but we probably can't. > If not, are > there any JSR members here who might join the working group and bring > some sanity and actual REST experience to the development of the > eventual specification? If we can't stop it, maybe we can at > least limit > the damage.
Elliotte, It seems to me approaching the effort with this attitude is not going to lead to a productive debate :-) RESTlet author Jérôme Louvel is also a member of the initial expert group, and while I have serious doubts about the code generation aspect, Marc Hadley's WADL seems to indicate he "gets" REST. Regarding On Feb 14, 2007, at 3:56 PM, Elliotte Harold wrote: > > I hope we can derail this completely, but we probably can't. If > not, are > there any JSR members here who might join the working group and bring > some sanity and actual REST experience to the development of the > eventual specification? If we can't stop it, maybe we can at least > limit > the damage. I believe the JSR is the chance to get decent HTTP support into the JDK, so I'd strongly suggest influencing instead of stopping it. Best regards, Stefan
Hi Elliotte, Let me look at this effort in a more constructive light. You forgot to mention that Marc Hadley is also the author of WADL which is the best RESTful description language available. He may have been involved in other standardization efforts, but that doesn't mean he doesn't get REST now: I know he does. I was also invited to the initial expert group, as a founder of the Restlet framework (http://www.restlet.org). I would like to mention that this JSR aims to provide a standard set of annotations to facilitate the mapping between business objects and REST resources. The implementation could be supported by either Restlets, Servlets or JAX-WS (via the HTTP bindings). I expect the result of this JSR to be complementary to the Restlet API, which hopefully looks RESTful enough to you. Let's not waste this opportunity to properly standardize REST on Java. See my recent posts for more details on my opinion: http://blog.noelios.com/2007/02/14/new-jsr-to-define-a-high-level-rest-api-for-java/ http://blog.noelios.com/2007/02/08/will-we-reconcile-rest-ws-and-soa/ Best regards, Jerome Louvel
Jerome Louvel wrote: > Let's not waste this opportunity to properly standardize REST on Java. What would one have to do to join the eg? cheers Bill
Hi Bill, You can apply for an Expert Group Nomination at: http://www.jcp.org/en/jsr/egnom?id=311 Thanks, Jerome
Elliotte Harold <elharo@...> writes: > Do we really want to trust these folks to define the official Java spec > for REST? Please read the JSR, and send comments to jsr-311-comments@... > > I hope we can derail this completely, but we probably can't. If not, are > there any JSR members here who might join the working group and bring > some sanity and actual REST experience to the development of the > eventual specification? If we can't stop it, maybe we can at least limit > the damage. This does seem a bit negative. I'm not sure I like REST-toolkits either... but only because REST is not a protocol or a platform, it's just a style. But what else to call such things? -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
On 2/14/07, Jerome Louvel <contact@...> wrote: > > Hi Elliotte, > > Let me look at this effort in a more constructive light. You forgot to > mention that Marc Hadley is also the author of WADL which is the best > RESTful description language available. Which Roy and I have both pointed out (in effect) is an oxymoron... > He may have been involved in > other standardization efforts, but that doesn't mean he doesn't get REST > now: I know he does. I like Marc a lot. We worked together at Sun, and he's just a very pleasant fellow. But I think he misunderstands important aspects of REST and the Web, so I have some concern there. Anyhow, rather than be critical of this effort, I've decided to submit my application to join the EG. Java definitely needs better APIs for the Web. I am concerned about the request (section 2) though; it goes on and on about how existing APIs are too "low level" and require a lot of knowledge of HTTP, which suggests to me that they're thinking of hiding as much of HTTP as possible; an obvious "protocol independence" bias there which will only lead to a crappy API. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Jérôme Louvel wrote: > > > Hi Bill, > > You can apply for an Expert Group Nomination at: > http://www.jcp.org/en/jsr/egnom?id=311 > <http://www.jcp.org/en/jsr/egnom?id=311> Thanks Jérôme. I've just submitted a nomination request. cheers Bill
I've exchanged emails with Marc Hadley, collected some reactions and published a news item here: http://www.infoq.com/news/2007/02/jsr-311-java-rest-api Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Feb 14, 2007, at 7:29 PM, Bill de hOra wrote: > Jérôme Louvel wrote: > > > > > > Hi Bill, > > > > You can apply for an Expert Group Nomination at: > > http://www.jcp.org/en/jsr/egnom?id=311 > > <http://www.jcp.org/en/jsr/egnom?id=311> > > Thanks Jérôme. I've just submitted a nomination request. > > cheers > Bill > > I just did the same. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Does anybody know how the expert groups work? Is there a public mailing list where non-expert-group-members can be heard, or does this all take place on some private mailing list? --Chuck On 2/14/07, Stefan Tilkov <stefan.tilkov@...> wrote: > On Feb 14, 2007, at 7:29 PM, Bill de hOra wrote: > > Jérôme Louvel wrote: > > > > > > > > > Hi Bill, > > > > > > You can apply for an Expert Group Nomination at: > > > http://www.jcp.org/en/jsr/egnom?id=311 > > > <http://www.jcp.org/en/jsr/egnom?id=311> > > > > Thanks Jérôme. I've just submitted a nomination request. > > > > cheers > > Bill > > > > > > I just did the same. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/
Last time I participated on one, it was all private. There's a public review period though. On 2/14/07, Chuck Hinson <chuck.hinson@...> wrote: > Does anybody know how the expert groups work? Is there a public > mailing list where non-expert-group-members can be heard, or does this > all take place on some private mailing list?
I think there may be some misunderstandings about the direction of this JSR; I just posted a blog entry that aims to clarify a couple of points: http://weblogs.java.net/blog/mhadley/archive/2007/02/jsr_311_java_ap.html Regards, Marc. --- In rest-discuss@yahoogroups.com, Elliotte Harold <elharo@...> wrote: > > It had to happen sooner or later. The big boys are waking up and > discovering REST, and naturally they want to protect all us little > developers from worrying our pretty little heads about nasty things like > HTTP and XML by creating easy-to-use REST frameworks: > > JSR-311 Java API for RESTful Web Services > http://jcp.org/en/jsr/detail?id=311 > > Remember, these are the same jokers who gave us servlets and the > URLConnection class as well as gems like JAX-RPC and JAX-WS. They still > seem to believe that these are actually good specs, and they are > proposing to tunnel REST services through JAX-WS (Java API for XML Web > Services) endpoints. > > They also seem to believe that "building RESTful Web services using the > Java Platform is significantly more complex than building SOAP-based > services". I don't know that this is false, but if it's true it's only > because Sun's HTTP API were designed by architecture astronauts who > didn't actually understand HTTP. This proposal does not seem to be > addressing the need for a decent HTTP API on either the client or server > side that actually follows RESTful principles instead of fighting > against them. > > To give you an idea of the background we're dealing with here, one of > the two people who wrote the proposal "represents Sun on the W3C XML > Protocol and W3C WS-Addressing working groups where he is co-editor of > the SOAP 1.2 and WS-Addressing 1.0 specifications. Marc was > co-specification lead for JAX-WS 2.0 (the Java API for Web Services) > developed at the JCP and has also served as Sun's technical lead and > alternate board member at the Web Services Interoperability Organization > (WS-I)."
> > The other submitter seems to be a primary instigator of the Fast Infoset > effort to hide XML in binary goop. > > This is like asking Karl Rove and Dick Cheney to write the Democratic > Party platform. > > Do we really want to trust these folks to define the official Java spec > for REST? Please read the JSR, and send comments to jsr-311-comments@... > > I hope we can derail this completely, but we probably can't. If not, are > there any JSR members here who might join the working group and bring > some sanity and actual REST experience to the development of the > eventual specification? If we can't stop it, maybe we can at least limit > the damage. > > > -- > Elliotte Rusty Harold elharo@... > Java I/O 2nd Edition Just Published! > http://www.cafeaulait.org/books/javaio2/ > http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/ >
Chuck Hinson wrote: > Does anybody know how the expert groups work? Is there a public > mailing list where non-expert-group-members can be heard, or does this > all take place on some private mailing list? > This is starting to change. A few are operating in public like JSR 305: http://jcp.org/en/jsr/detail?id=305 Most are still private with limited public review. I do not know which approach 311 plans to take. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 14 Feb 2007, at 15:56, Elliotte Harold wrote: > Do we really want to trust these folks to define the official Java > spec > for REST? Please read the JSR, and send comments to jsr-311- > comments@... You know, I think I trust Marc et al. to come up with something worthwhile and put it in the hands of a lot of developers. What worries me most about REST is that it isn't testable. What allows you to say this site is "powered by REST" beyond passing the sniff test of a bunch of self-appointed critics on this list? In this respect it's really no better than SOA. This kind of Email is one of the reasons why I tend to avoid using the R-word. Pirates. "Style". Pthrupt! Paul -- http://blog.whatfettle.com
Thanks, Marc. Good to get some light amidst the heat. :-) Importantly, it doesn't really seem like they (you?) are trying to "standardize REST", but merely provide a "convenience API" for Java programmers who want to implement RESTful web services. Right? -enp On Feb 14, 2007, at 1:34 PM, marc_hadley wrote: > I think there may be some misunderstandings about the direction of > this JSR, I just > posted a blog entry that aims to clarify a couple of points: > > http://weblogs.java.net/blog/mhadley/archive/2007/02/ > jsr_311_java_ap.html > > Regards, > Marc. >
On 2/14/07, Elliotte Harold <elharo@...> wrote: > > This is starting to change. A few are operating in public like JSR 305: > > http://jcp.org/en/jsr/detail?id=305 > > Most are still private with limited public review. I do not know which > approach 311 plans to take. I hope that the discussion is public. I am a .NET developer, so a Java implementation is not of primary interest to me, but I would certainly be interested to track the thinking as it works through the API design process to factor into my own. I wonder what will be taken as the starting point. Regards, Alan Dean
On 2/14/07, Paul Downey <paul.downey@...> wrote: > > What worries me most about REST is that it isn't testable. > What allows you to say this site is "powered by REST" > beyond passing the sniff of a bunch of self-appointed > critics on this list. > > In this respect it's really no better than SOA. I am trying to tackle exactly this issue. I certainly agree that as a general statement, REST isn't testable as it is an architectural style ... however ... I think that it is possible to make testable RESTful patterns. There will be more than one pattern, to be sure, but each pattern will be testable. I am working on EARL test / assertion representations of my own pattern and an engine to exercise EARL instances (hence the diagram I showed the group last month). Having this would permit a web application to claim a 'conformance level' to a specific pattern - just as HTML can claim various WCAG conformance levels. Alan Dean http://thoughtpad.net/who/alan-dean/news
Paul Downey <paul.downey@...> writes: > What worries me most about REST is that it isn't testable. > What allows you to say this site is "powered by REST" > beyond passing the sniff of a bunch of self-appointed > critics on this list. The reason you can't make REST testable is that REST isn't one thing. It's an architectural style. HTTP allows highly RESTful systems. But an HTTP app can be more, or less, RESTful. However, it's impossible IMHO to talk about *pure* REST. Some things RESTful are in contradiction with others, and the extent to which one is used over another will be context specific. Some things anti-RESTful are perfectly acceptable to most developers and always will be. Cookies are about to get another lease of life from openid, for example. Having said that, I think if there was enough interest one could make a qualitative benchmarking system. I've dropped another post about that. > This kind of Email is one of the reasons why I tend to > avoid using the R-word. And that would be quite right except when discussing the properties of a particular web application/service as they pertain to its scalability and maintainability. Which is what we do here of course. Roy has complained before about the term being hijacked. And I think it is slightly dangerous to talk about RESTful toolkits and REST support. I am quite happy for people to say "this one thing is more RESTful than this other thing". As long as it is, of course. In conclusion I ask you this: is ML a functional language? Yes? Is ML a pure functional language? Is C a functional language? You take my point I'm sure: in the absence of another word, REST is going to be misused for implementation specifics. > Pirates. "Style". Pthrupt! Watch it. I'll have you down that plank before you can polish your sabre. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
"Alan Dean" <alan.dean@...> writes: > On 2/14/07, Paul Downey <paul.downey@...> wrote: >> >> What worries me most about REST is that it isn't testable. >> What allows you to say this site is "powered by REST" >> beyond passing the sniff of a bunch of self-appointed >> critics on this list. >> >> In this respect it's really no better than SOA. > > I am trying to tackle exactly this issue. I certainly agree that as a > general statement, REST isn't testable as it is an architectural style > ... however ... I think that it is possible to make testable RESTful > patterns. There will be more than one pattern, to be sure, but each > pattern will be testable. > > I am working on EARL test / assertion representations of my own > pattern and an engine to exercise EARL instances (hence the diagram I > showed the group last month). Having this would permit a web > application to claim a 'conformance level' to a specific pattern - > just as HTML can claim various WCAG conformance levels. Oh! that's excellent. Just the sort of thing I was thinking of. Well done. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
So Paul would like to see something other than me going "aha! it's like the pirate code me hearties" and other people going "eurgh" and then there's an endless thread about pure or not so pure. I think one could easily produce a series of qualitative tests for HTTP-based web services to establish their RESTful goodness. One would have to come up with a list of attributes of RESTful systems: - what are the content types? - does it support HEAD properly? - what happens when you do methods that aren't supported? etc... and then apply weights to each of them. Then write a client to test a webapp for each of the attributes (we might call them constraints /8-) and then produce a report. Anyone interested in doing this with me? -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
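A minimal sketch of the weighted-attribute report Nic describes. The attribute names and weights below are invented for illustration; they are not a proposed standard, just one way the "apply weights and produce a report" step could look:

```python
# Hypothetical sketch of a weighted RESTfulness score: each attribute
# (constraint) gets a weight, a probe records pass/fail, and the report
# is the weighted fraction earned. Names and weights are illustrative.

WEIGHTS = {
    "meaningful_content_types": 3,   # what are the content types?
    "head_supported": 2,             # does it support HEAD properly?
    "405_on_unknown_method": 2,      # unsupported methods rejected?
    "etags_present": 1,              # does it emit ETags?
}

def restful_score(observations):
    """observations: dict mapping attribute name -> True/False."""
    total = sum(WEIGHTS.values())
    earned = sum(w for name, w in WEIGHTS.items() if observations.get(name))
    return earned / total

# Example report for a service that gets content types and HEAD right
# but returns 200 for unknown methods and sends no ETags:
score = restful_score({
    "meaningful_content_types": True,
    "head_supported": True,
    "405_on_unknown_method": False,
    "etags_present": False,
})
print(round(score, 3))  # 5 of 8 weighted points -> 0.625
```

The probing client that fills in `observations` (by actually issuing HEAD requests, unknown methods, and so on) is the harder part; the scoring itself is trivial once the attribute list is agreed.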
On Feb 14, 2007, at 10:43 PM, Paul Downey wrote: > > On 14 Feb 2007, at 15:56, Elliotte Harold wrote: > > Do we really want to trust these folks to define the official Java > > spec > > for REST? Please read the JSR, and send comments to jsr-311- > > comments@... > You know, I think I trust Marc et all to come up with > something worthwhile and put it in the hands of a > lot of developers. > > What worries me most about REST is that it isn't testable. > What allows you to say this site is "powered by REST" > beyond passing the sniff of a bunch of self-appointed > critics on this list. > I agree ... > In this respect it's really no better than SOA. > ... and disagree. There is nothing even remotely close to a definitive work on SOA. This is different with REST - obviously, Roy's dissertation leaves room for interpretation, but it's still something else. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On 2/14/07, Nic James Ferrier <nferrier@...> wrote: > > I think one could easily produce a series of qualitative tests for > HTTP based web services to establish their RESTfull goodness. > > One would have to come up with a list of attributes of RESTfull > systems: > > - what's the content types? > - does it support HEAD properly? > - what happens when you do methods that aren't supported? etc... > > and then apply weights to each of them. > > Then write a client to test a webapp for each of the attributes (we > might call them constraints /8-) and then produce a report. > > Anyone interested in doing this with me? I am certainly interested in collaboration - both with other people's efforts and bringing the knowledge of others into my own efforts. Alan Dean http://thoughtpad.net/who/alan-dean/news
Stefan Tilkov wrote: > On Feb 14, 2007, at 7:29 PM, Bill de hOra wrote: >> Jérôme Louvel wrote: >> > >> > >> > Hi Bill, >> > >> > You can apply for an Expert Group Nomination at: >> > http://www.jcp.org/en/jsr/egnom?id=311 >> > <http://www.jcp.org/en/jsr/egnom?id=311> >> >> Thanks Jérôme. I've just submitted a nomination request. >> >> cheers >> Bill >> >> > > I just did the same. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > As did I. Pete
Paul Downey wrote: > > > > On 14 Feb 2007, at 15:56, Elliotte Harold wrote: > > Do we really want to trust these folks to define the official Java > > spec > > for REST? Please read the JSR, and send comments to jsr-311- > > comments@... <mailto:comments%40jcp.org> > You know, I think I trust Marc et all to come up with > something worthwhile and put it in the hands of a > lot of developers. > > What worries me most about REST is that it isn't testable. But it is explicit. It's difficult to say that about SOA. Then again is this really a useful way to speak - I could say "what worries me about design patterns is that they aren't testable" - but what would that mean? cheers Bill
Nic James Ferrier wrote: > Roy has complained before about the term being hijacked. That's either going to happen or we become the smug Lisp weenies of distributed systems. The question is: are the REST community (us) and software designers who are prepared to work under the REST style "ready for success"? I'd say no. If you look at what Fowler has said about OO, that lots of strange things were done in its name, I think the same will happen around REST. It's part of being adopted. Then again, if it came down to it, good luck persuading anyone you know more about OO than him. > it is slightly dangerous to talk about RESTfull toolkits and REST > support. I am quite happy for people to say "this one thing is more > RESTfull than this other thing". As long as it is of course. "Where are the REST toolkits" is one of my favorite quotes of the last few years, partially because it's so right and so wrong. Perhaps it is the wrong way to think about things, but you could *never* persuade me that there is adequate technical support for building in the REST style today. Most frameworks are RPC/Gateway driven, which results in bad, hard-to-maintain web software - you develop in the REST style on most popular frameworks and you'll ship late. That's without even getting into arcana like rolling your own etags and figuring out where to leave them. cheers Bill
Bill de hOra <bill@...> writes: > That's either going to happen or we become the smug Lisp weenies of > distributed systems. The question is are the REST community (us) and > software designers who are prepared to work under the REST style "ready > for success"? I'd say no. Hey. I'm a smug lisp weenie. > "Where are the REST toolkits" is one of my favorite quotes of the last > few years, partially because it's so right and so wrong. Perhaps it is > the wrong way to think about things, but you could *never* persuade me > that there is adequate technical support for building in the REST style > today. Most frameworks are RPC/Gateway driven which results in bad hard > to maintain web software - you develop in the REST style on most popular > frameworks and you'll ship late. That's without even getting into arcana > like rolling your own etags and figuring out where to leave them. Certainly there are better HTTP toolkits that need to be written. I was doing some openid hacking the other day with python's urllib2 - supposedly quite a modern HTTP tool. Frankly I'd rather have slammed my fingers in a door. It was a deeply unpleasant experience. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Marc, I already explained to Rajiv last November that I would not allow Sun to go forward with the REST name in the API. It doesn't make any sense to name one API as the RESTful API for Java, and I simply cannot allow Sun to claim ownership of the name (which is what the JSR process does by design). Change the API name to something neutral, like JAX-RS. ....Roy
On 15 Feb 2007, at 00:08, Bill de hOra wrote: > > What worries me most about REST is that it isn't testable. > > But it is explicit. It's difficult to say that about SOA. yeah, I'm not speaking in favour of SOA, just that REST could be a little more concrete. OK not REST, but I'd like some more concrete to point at and advise people to emulate. > Then again is this really a useful way to speak - I could > say "what worries me about > design patterns is that they aren't testable" > - but what would that mean? That's fair comment, but if you use a framework (Hmm. is it a framework, dunno) such as JSR 311, then chances are you're going to expose your service RESTfully, and that can only be goodness. -- http://blog.whatfettle.com
Nic James Ferrier wrote: > I was doing some openid hacking the other day with python's urllib2 - > supposedly quite a modern HTTP tool. Frankly I'd rather have slammed > my fingers in a door. It was a deeply unpleasant experience. Try this one: http://bitworking.org/projects/httplib2/ cheers Bill
Bill de hOra <bill@...> writes: > Nic James Ferrier wrote: > >> I was doing some openid hacking the other day with python's urllib2 - >> supposedly quite a modern HTTP tool. Frankly I'd rather have slammed >> my fingers in a door. It was a deeply unpleasant experience. > > Try this one: > > http://bitworking.org/projects/httplib2/ Yeah... it's nice. But the fact that there are good frameworks out there wasn't my point. urllib2 comes with python, a language that supposedly gets the web. And urllib2 really sucks. Similarly, let's hear it for HttpURLConnection. Java, that famous Internet language, has a really bad http client library. And these are just the clients. Really, when I want to do something client side I tend to use curl. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
On Feb 14, 2007, at 6:36 PM, Roy T. Fielding wrote: > Marc, I already explained to Rajiv last November that I would not > allow Sun to go forward with the REST name in the API. It doesn't > make any sense to name one API as the RESTful API for Java, and I > simply cannot allow Sun to claim ownership of the name (which is > what the JSR process does by design). Change the API name to > something neutral, like JAX-RS. > Roy, I think we may have gotten our wires crossed somewhere, when you discussed this with Rajiv last November you certainly didn't rule out any name with REST in it, e.g. here is a quote from an email you sent on Nov 30th in response to an email from Rajiv exploring names that would be OK with you. >> Java API for XML programming in the REST Style - JAX-RS. >> > > Sure, that's a fine name, especially if you intend it to be limited > to XML messages. The name we adopted is "Java API for RESTful Web Services" since the API won't be XML specific. This isn't very different from the name you approved of earlier. Marc. --- Marc Hadley <marc.hadley at sun.com> CTO Office, Sun Microsystems.
On 2/14/07, Paul Downey <paul.downey@...> wrote: > What worries me most about REST is that it isn't testable. Eeek! REST is most definitely testable. See sec 5 of Roy's dissertation for the tests (constraints). I agree that calling a Java API "RESTful" is a misnomer though. If it was described as "supporting RESTful application development", that would be better. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
On 2/14/07, Nic James Ferrier <nferrier@...> wrote: > I was doing some openid hacking the other day with python's urllib2 - > supposedly quite a modern HTTP tool. Frankly I'd rather have slammed > my fingers in a door. It was a deeply unpleasant experience. In a shameless bit of self-promotion, you should really be using httplib2: http://bitworking.org/projects/httplib2/ -joe -- Joe Gregorio http://bitworking.org
On 2/14/07, Jerome Louvel <contact@...> wrote: > > Let's not waste this opportunity to properly standardize REST on Java. Does the Expert Group include anyone who has written such an API already? For example, Jetty has had something similar for years. Also, you are standardizing a server-side HTTP API, not REST. From what I have seen so far, it looks like something I would use, but please rename it. -- Robert Sayre <http://blog.mozilla.com/rob-sayre/> "I would have written a shorter letter, but I did not have the time."
On 2/14/07, Elliotte Harold <elharo@...> wrote: > This is like asking Karl Rove and Dick Cheney to write the Democratic > Party platform. Ouch. That's unnecessarily sharp. I've worked with Marc on the URI Template spec and he's smart, easy to work with, and in no way deserves this kind of personal attack. > I hope we can derail this completely, but we probably can't. If not, are > there any JSR members here who might join the working group and bring > some sanity and actual REST experience to the development of the > eventual specification? If we can't stop it, maybe we can at least limit > the damage. I *do* have my concerns about the process and the end results of a JSR with the term REST attached to it, only because I can imagine a near future where the "definition" of REST to the average Java programmer is anything you can build with a JSR 311 library, and the converse, that anything you can't build with a JSR 311 library is not RESTful. -joe -- Joe Gregorio http://bitworking.org
Nic James Ferrier wrote: > urllib2 comes with python, a language that supposedly > gets the web. And urllib2 really sucks. What's wrong with it? (he asks after just making the decision to focus the vast majority of his next language learning time on python...) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
On 14.02.2007, at 19:32, Stefan Tilkov wrote: > I've exchanged emails with Marc Hadley, collected some reactions > and published a news item here: > > http://www.infoq.com/news/2007/02/jsr-311-java-rest-api Stefan, good overview, thanks. Also gives me some hope :-) I think the JSR provides a good opportunity to discuss REST principles in a practical/implementation context. I expect to see a lot of "Naa...you cannot do that because..." kinds of discussions with associated AHA-Moments - well, if the group accepts my application. Jan > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > >
"Mike Schinkel" <mikeschinkel@...> writes:
> Nic James Ferrier wrote:
>> urllib2 comes with python, a language that supposedly
>> gets the web. And urllib2 really sucks.
>
> What's wrong with it?
It's very complex and not as configurable as it should be.
I have been doing something interesting with SSL lately and was using
Python. Python's default socket implementation is broken for client
certificates (it assumes some mad defaults). Consequently urllib2 is
broken as well.
And here is what I had to do to get a working urllib2:
# HTTPS client based on OpenSSL
import sys
import urllib2
import httplib
import socket
import OpenSSL
import traceback

class MyFakeSocket(httplib.FakeSocket):
    # Delegate the raw socket operations to the OpenSSL connection.
    def send(self, *args):
        return self._ssl.send(*args)
    def sendall(self, *args):
        return self._ssl.sendall(*args)
    def recv(self, *args):
        return self._ssl.recv(*args)

class MyHTTPSConnection(httplib.HTTPConnection):
    "This class allows communication via SSL."
    default_port = httplib.HTTPS_PORT
    def __init__(self, host, port=None):
        httplib.HTTPConnection.__init__(self, host, port)
    def connect(self):
        "Connect to a host on a given (SSL) port."
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((self.host, self.port))
        ctx = OpenSSL.SSL.Context(OpenSSL.SSL.SSLv3_METHOD)
        ssl = OpenSSL.SSL.Connection(ctx, sock)
        ssl.connect_ex((self.host, self.port))
        self.sock = MyFakeSocket(sock, ssl)

class MyHTTPSHandler(urllib2.HTTPSHandler):
    def https_open(self, req):
        return self.do_open(MyHTTPSConnection, req)

def init():
    # Install an opener that routes https:// URLs through the
    # OpenSSL-backed connection class above.
    try:
        https_handler = MyHTTPSHandler()
        od = urllib2.build_opener(https_handler)
        urllib2.install_opener(od)
    except Exception, e:
        traceback.print_tb(sys.exc_info()[2])
        print str(e)
Unfortunately, in order to find out how to do that I had to read
through the code. I'll blog this somewhere soon so that the python
universe is improved a little bit but I think I'd advise using the
kitkeeper library if you're doing any http hacking.
> (he asks after just making the decision to focus the
> vast majority of his next language learning time on python...)
I'd applaud that. Python has some really nice points (and some really
bad points: like no declaration of variables). But its HTTP client
really sucks.
--
Nic Ferrier
http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Joe Gregorio wrote: > On 2/14/07, Nic James Ferrier <nferrier@...> wrote: >> I was doing some openid hacking the other day with python's urllib2 - >> supposedly quite a modern HTTP tool. Frankly I'd rather have slammed >> my fingers in a door. It was a deeply unpleasant experience. > > In a shameless bit of self-promotion, you should > really be using httplib2: > > http://bitworking.org/projects/httplib2/ Any chance of getting httplib2 into the standard library Joe? cheers Bill
There's an assumption so far in this thread that the 311 proposal is purely a server side technology. However, I can find nothing in the JSR to explicitly indicate this or contradict it. (The sheer vagueness of the proposal is a major problem, and a big reason why this should be done as an open source project *outside* the JCP first, before standardization. If that were the case I'd have many fewer concerns with it.) However reading between the lines I see some evidence that this plays both sides of the fence. I think there's an assumption that tools will be used on the one side to consume what the server side produces. For instance they might share a WADL file. If so, this would weaken or break the client independence of HTTP. We'd be back in WS-Hell where theoretically independent services can only be consumed by clients written with the same framework. One of two things needs to happen here: 1. The JSR is rewritten to make it clear that this is purely a server-side proposal and that no client side technology will be specced. 2. If the goal is to develop both client and server APIs, then this needs to be split into two independent JSRs, with separate spec leads and expert groups. Furthermore there should be a wall between the two efforts. They must not act as if they know what the other is doing. The client group should design for nothing but a generic HTTP server, and the server group should design for nothing more than a generic HTTP client. Neither may depend on any functionality of the other. For example, a server side Representation class should not share code or interface with a client Representation class. Doing that risks introducing unintended dependencies and tightly coupling 311 clients to 311 servers. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
[ Attachment content not displayed ]
Hi All, Guess this is the right forum to post this question. I am supposed to make a shopping cart solution. Though I have options like osCommerce and ZenCart, I would like to go for something which has exposed itself as a web service (preferably RESTful, why not? :) ). That will help me develop my client (Ajax based, with jQuery or some other JavaScript framework) without old, throw-all-content-to-user shopping cart pages. I tried to search the net but in vain. In fact, I have tried to find a module for osCommerce (anything, for that matter) which gives me all the basic e-commerce features as a web service, but couldn't find one. Has anybody come across a piece of software which can help me? Thanks in advance, Samyak
Hi Elliotte, In my mind this JSR is only aimed at server-side applications. However, if you take the Restlet API, it is both a server-side and client-side API, and we see this as a strong advantage compared to existing solutions requiring you to learn two different APIs (Servlet API + HttpURLConnection for example). If you take the case of a reverse proxy, our Representation class can be retrieved by a client HTTP connector from a remote origin server and then directly forwarded to the original client via the server HTTP connector. Could you explain what is wrong in ensuring consistency (where it makes sense) between both sides? IMO, more and more Web applications can't be strictly categorized as client-side or server-side (ex: mash-ups) so this wall that you want to raise seems unnecessary. Regards, Jerome
Jérôme Louvel wrote: > Could you explain what is wrong in ensuring consistency (where it > makes senses) between both sides? IMO, more an more Web applications > can't be strictly categorized as client-side or server-side (ex: > mash-ups) so this wall that you want to raise seems unnecessary. One goal of both WS-* and REST is that clients written in a multitude of languages on a multitude of platforms will be able to talk to each other interoperably. The server does not care what language the client is written in and vice versa. At least that's the theory. In practice WS-* doesn't actually interoperate all that well. .NET servers work with .NET clients and Java servers work with Java clients (if they use the same basic framework) but just try getting .NET servers to talk to Java clients or vice versa. You can do it, but it takes lots of debugging and detailed knowledge of which pieces of which specs are and are not reliable between platforms. Consequently most intranet services are platform-locked and most Internet services aren't used all that much. Even when interoperability is the goal, lots of little assumptions tend to get baked into the libraries that prevent other libraries from communicating. Sharing classes between the client and server is a real code smell. Consider a Representation class, for example. The server sends a representation and the client receives it, so it should be appropriate for them to use the same class to model it, right? Actually no. The client and server have different views of this object. For instance the server is probably going to want some sort of pointer back to the actual resource from which the representation is derived while the client should not see any such pointer. Furthermore, the client needs a way to get a byte array or InputStream for the message body in the Representation. The server needs a way to set the byte array or get an OutputStream for the message body. 
If you try to get away with one Representation class, then each side will have methods it doesn't need. The only way to get the ideal client API is to design just a client API. The only way to get the best server-side API is to design just a server-side API. If REST is done right, then we shouldn't need to coordinate the two design efforts. The interfaces between the two should be limited to HTTP. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
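A hedged sketch of the separation Elliotte argues for (class and method names are invented for this illustration; this is not any proposed JSR 311 API): the client-side type only knows how to read a received entity, the server-side type only knows how to produce one and may keep its private link back to the resource, and the two share nothing but the bytes on the wire.

```python
import io

# Illustration only: two independent types rather than one shared
# Representation class. All names here are hypothetical.

class ClientRepresentation:
    """What a client sees: a received entity it can only read."""
    def __init__(self, media_type, body):
        self.media_type = media_type
        self._body = body
    def input_stream(self):
        return io.BytesIO(self._body)

class ServerRepresentation:
    """What a server builds: an entity it writes, plus a pointer back
    to the resource it was derived from (which clients never see)."""
    def __init__(self, media_type, resource):
        self.media_type = media_type
        self.resource = resource          # server-side concern only
        self._buffer = io.BytesIO()
    def output_stream(self):
        return self._buffer
    def body(self):
        return self._buffer.getvalue()

# The only coupling is the transferred bytes, standing in for HTTP:
rep = ServerRepresentation("text/plain", resource=object())
rep.output_stream().write(b"hello")
received = ClientRepresentation("text/plain", rep.body())
print(received.input_stream().read())  # b'hello'
```

Note that `ClientRepresentation` carries no `resource` attribute at all; the dependency simply cannot leak across because there is no shared class to carry it.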
Elliotte, Interoperability between clients and servers at the wire level is different from consistency between client-side and server-side API design, especially when several implementations of the same API are expected. In our case (RESTful Web services API), the common denominator should be all of HTTP 1.1 and just HTTP 1.1. I don't think it would be wise to include any kind of object serialization to/from XML in such an API, if that is your concern. We don't want to fall into WS-* interop issues, as you correctly point out. > Consider a Representation class, for example. The server sends a > representation and the client receives it, so it should be appropriate > for them to use the same class to model it, right? Actually no. The > client and server have different views of this object. For instance the > server is probably going to want some sort of pointer back to the actual > resource from which the representation is derived while the client > should not see any such pointer. Talking about Restlet's Representation class, it doesn't contain any reference to the parent Resource, so that is not an issue for us. We never had a user complaining about this design choice. > Furthermore, the client needs a way to get a byte array or InputStream for the message body in the Representation. The server needs a way to set the byte array or get an > OutputStream for the message body. If you try to get away with one > Representation class, then each side will have methods it doesn't need. In our case, we designed the Representation class like a Content (or like an HTTP entity if you prefer) that can be either written to an OutputStream (or to an NIO WritableByteChannel) or read by getting an InputStream (or an NIO ReadableByteChannel). We have several abstract implementations that can let you automatically convert from one IO mode to another.
Have a look at this hierarchy diagram: http://www.restlet.org/tutorial#conclusion Check also the Javadocs of the org.restlet.resource package: http://www.restlet.org/docs/api/org/restlet/resource/package-summary.html Best regards, Jerome
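As a rough sketch of the single-entity design Jerome describes, one abstraction that can be read as a stream or written to a stream. This is an illustration in Python with invented names, not the Restlet API itself (see the Javadocs linked above for the real class hierarchy):

```python
import io

class Representation:
    """Sketch of one entity abstraction usable in both IO modes:
    read via a stream, or written out to a stream. Illustrative only;
    the real Restlet design also covers NIO channels."""
    def __init__(self, data=b"", media_type="application/octet-stream"):
        self._data = data
        self.media_type = media_type
    def get_stream(self):
        # Read mode: hand the consumer an input stream over the entity.
        return io.BytesIO(self._data)
    def write_to(self, out):
        # Write mode: pump the entity into a caller-supplied sink.
        out.write(self._data)

# The same object serves a client reading a response...
rep = Representation(b"<doc/>", "application/xml")
assert rep.get_stream().read() == b"<doc/>"

# ...and a server writing it out, e.g. when a reverse proxy forwards
# a fetched representation straight back to the original client:
sink = io.BytesIO()
rep.write_to(sink)
assert sink.getvalue() == b"<doc/>"
```

The reverse-proxy case in Jerome's earlier message is exactly where this pays off: the entity fetched by the client connector is handed unchanged to the server connector, with no copy between two parallel class hierarchies.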
On 2/15/07, Elliotte Harold <elharo@...> wrote: > The only way to get the ideal client API is to design just a client API. > The only way to get the best serve side API is to design just a server > side API. If REST is done right, then we shouldn't need to coordinate > the two design efforts. The interfaces between the two should be limited > to HTTP. I agree. Moreover, I also think you generally have different design goals with client and server APIs. For example performance is normally a lot more important for servers than clients. Also, a simple API might be more desirable on the client than on the server (again, because of performance). For these reasons it will likely do more harm than good to try to use server abstractions in a client API or vice-versa. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Hi Steve, You raise some interesting points, detailed discussion of which really belongs in the expert group once it is formed. For now I'll take a shot at some of the higher level questions, answers interspersed below. On Feb 15, 2007, at 6:10 AM, Steve Loughran wrote: > > o Why not just have an interface with the well known verbs on? I know > Java5 makes annotations possible, but it is not clear they are always > appropriate. Maybe with WS-*, where your endpoints want to marshall > arbitrary java graphs across a broad set of verbs, but with HTTP you > have the fairly constrained verb set of Http+DAV, > IMO, an interface locks down the method signatures too tightly. Look at the example on my blog where I show a URI parameter being injected as a method parameter; you can do the same kind of thing with query parameters, matrix parameters or even HTTP headers, but to do that you need to leave the method signatures flexible. > o It should be the option of the of previous elements in the directory > path to dynamically determine which resource classes should receive > the next part of the path. That is, not merely a simple regexp mapping > of url->resource, but something that multiple components can > dynamically resolve. > > o Is this intended to be an adjunct to Java EE, something to export > session beans over HTTP, or something svelte that runs atop something > minimal like jetty? > It's intended to run atop a variety of HTTP servers; I'd like to see Jetty supported. We have some ideas for a container SPI that should allow an impl to be hosted on top of any container that can make the request information available and write a response. > o Is this going to be the place where the servlet API gets looked at, > for the first time in a decade?
I recall discussing with the Java EE > spec leads the absence of any annotations to make writing servlets as > easy as it is to do stateful Java EE session beans; for some reason > you dont need to edit any XML files to create a WS epr bound to a > session bean, yet to write your own servlet you need to code web.xml > files or play with XDoclet. > This isn't intended to replace Servlet, deployment within a servlet container is one of the options called out explicitly in the JSR. > o Is JAX-RS going to focus on O/X mapping as the best way to work with > XML, with JAXB 2.0 as the mapping system? > No, though it should be easy to plug in JAXB if you want to use it. We have some ideas for a representation SPI that should allow you to plug in support for a variety of serialization/deserialization technologies. > o Is the server-side metaphor still going to be something with the > flavour of RPC? OR something that makes effective use of the NIO and > concurrency libraries > The server side metaphor should be that of a uniform interface. Sync vs async is a very good question and one I think the expert group needs to spend some cycles on. > o What's the fault model going to be? What happens if a method throws > an exception? > It gets converted into an appropriate HTTP status code. > o What public repository is the TCK test suite going to be hosted in > (apache, java.net, google, sourceforge?), and will you be using JUnit > or TestNG with HttpUnit/XmlUnit? > All TBD. > I am seriously debating joining this group, perhaps even as an > representative of an organisation. However I am concerned not just > about the basic architecture, but about the test process. 
> > I am currently finalising the interop result documents for an > OASIS-based standard, one built on the house of cards that is WS-*, > and have come to some strong opinions about how test-driven standards > can be, in contrast to things like WS-A where testability was clearly > an afterthought (see attached document). While I celebrate the fact > that the JCP, unlike the W3C or OASIS, has a test-centric process, I > have seen how hard it is for OSS projects to even get access to the > TCK to products such as JAX-WS, creating a barrier to testing and > redistribution. A public TCK that could be checked out a build under > Gump would let downstream implementations also build and test nightly, > and it would let the implementors add more test cases to the TCK as > they encountered problems or ambiguity. > > Returning to the architecture. Anything that tries to bring the same > ease of use of building WS-* to the REST world has missed something > obvious. While JAX-WS may make it easy to export a java method to the > rest of the world, it does nothing to ensure the world can talk to it. > If there are things from the Java world we ought to borrow, it could > be > > -the option of generating structured faults, be they SOAPFaults or > XHTML with machine parseable content. Not using the "transparent" > marshalling of JAX-WS, but something more like I describe in my M32 > paper; having an interface that faults can implement if they want to > generate their own XML/XHTML > Sounds like a nice idea. I wouldn't tie it XML though, an endpoint might want to support other formats like JSON too. > -a handler chain for incoming content instead of relying on what is > built in to the system. There is already much of this built in to > Apache Tomcat, of course. > I have mixed feelings about handler chains, as you note that sort of functionality is already built into a variety of containers. > -a programmatic way to configure things. 
Right now the normal servlet > API is purely declarative, except for system-specific APIs. I need the > right to add an remove resource mappings dynamically. > Also an interesting idea. Regards, Marc. --- Marc Hadley <marc.hadley at sun.com> CTO Office, Sun Microsystems.
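[Editorial sketch] Marc's point about leaving method signatures flexible is easier to see in code. The following is a minimal, self-contained illustration; the annotation names are hypothetical stand-ins (the JSR 311 names were not final at this time), and the container's reflection-driven injection is simulated by a plain method call:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical stand-ins for injection annotations, defined locally
// so the sketch is self-contained.
@Retention(RetentionPolicy.RUNTIME)
@interface UriParam { String value(); }

@Retention(RetentionPolicy.RUNTIME)
@interface QueryParam { String value(); }

// A resource class: the method signature is free-form, so a container
// can inject a URI template parameter and a query parameter by matching
// annotations. A fixed interface with well-known verb methods could not
// express this per-resource shape.
class OrderResource {
    public String getOrder(@UriParam("orderId") String orderId,
                           @QueryParam("format") String format) {
        return "order " + orderId + " as " + format;
    }
}
```

In a real container the parameters would be bound at dispatch time by reading the annotations reflectively; a different resource could declare a completely different parameter list.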
Paul Downey wrote:
> On 15 Feb 2007, at 00:08, Bill de hOra wrote:
>>> What worries me most about REST is that it isn't testable.
>> But it is explicit. It's difficult to say that about SOA.
>
> yeah, I'm not speaking in favour of SOA, just that REST
> could be a little more concrete. OK not REST, but I'd
> like some more concrete to point at and advise people
> to emulate.

Build a RESTful standard. Build tests for it. You've now got testable REST, and the rest of us can choose to use it or not.

>> Then again is this really a useful way to speak - I could
>> say "what worries me about design patterns is that they
>> aren't testable" - but what would that mean?
>
> That's fair comment, but if you use a framework (Hmm. is it a
> framework, dunno) such as JSR 311, then chances
> are you're going to expose your service RESTfully, and
> that can only be goodness.

Depends on how much freedom it gives you. You could say we have a (fairly) RESTful (and testable) framework in RFC 2616 and the various tools that use it (browsers, servers, server-side scripting languages, XMLHTTP, et al.). You could argue that these give you too much freedom and hence it's possible to not behave RESTfully with them (though experience shows that tools with the least freedom tend to force you to be unRESTful rather than RESTful).

Personally I don't give a damn about testability of REST. I am a big fan of REST (evangelical even) but I will personally break every single principle of REST to get a job done. The reason I generally don't is that those principles give me real benefits. *THAT* is testable. Don't test REST, test its results.
Hi Mark,

> I agree. Moreover, I also think you generally have different design
> goals with client and server APIs. For example performance is
> normally a lot more important for servers than clients. Also, a
> simple API might be more desirable on the client than on the server
> (again, because of performance). For these reasons it will likely do
> more harm than good to try to use server abstractions in a client API
> or vice-versa.

I understand your point of view, but we also have to take into account applications that need high performance for clients too. Implementing transparent RESTful proxies for non-RESTful applications is a common use case where you need excellent performance and multi-threading on both sides of your application.

FYI, we provide two client HTTP connectors (pluggable and using the same API), one based on the JDK's HttpURLConnection and another based on the Apache HTTP Client library.

Also, in our case, you can directly set the client's output representation reference as the server's output representation, without reading/buffering anything. An optimized implementation of the Restlet API could even directly move bytes from client socket to server socket without consuming JVM memory, thanks to Java's NIO.

Anyway, we didn't find the sharing of common classes between the server side and the client side of the API to be an issue; quite the opposite. For this design, we leveraged the notion of the REST uniform interface (see our org.restlet.Uniform class) and the generic HTTP message aspects (entity headers common to both HTTP requests and responses). Just think about representation metadata (ETag, media type, encoding, etc.): they are strictly the same.

Even the HTTP specification doesn't provide two separate definitions for requests and responses depending on whether you see the communication from the client end or from the server end of the connection, so why define them twice at the API level?

Best regards,
Jerome
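[Editorial sketch] The zero-copy streaming Jérôme alludes to can be illustrated with plain NIO channels. This is not Restlet API; the class and method names here are the editor's own. One small reusable buffer shuttles bytes from the inbound channel to the outbound one, so the JVM never holds the whole entity in memory:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;

class EntityPump {
    // Copy everything from 'in' to 'out' through one 8 KB buffer,
    // regardless of how large the entity is.
    static void pump(ReadableByteChannel in, WritableByteChannel out)
            throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(8192);
        while (in.read(buf) != -1) {
            buf.flip();                  // switch buffer to draining mode
            while (buf.hasRemaining()) {
                out.write(buf);          // may take several writes
            }
            buf.clear();                 // back to filling mode
        }
    }
}
```

In a proxy, 'in' would wrap the inbound connection's entity stream and 'out' the upstream socket; any pair of channels works, which is what makes a shared client/server representation abstraction possible.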
Jérôme Louvel wrote:
> Elliotte,
>
> Interoperability between clients and servers at the wire level is
> different from consistency between client-side and server-side API
> design, especially when several implementations of the same API are
> expected. In our case (RESTful Web services API), the common
> denominator should be all of HTTP 1.1 and just HTTP 1.1.

Multiple implementations of the same API are rarely necessary and have resulted in overly complex designs throughout the Java class library. The big smell is use of the abstract factory design pattern. That's a sure sign of architecture astronautics and a confusing, hard-to-use, hard-to-deploy API.

If multiple implementations are necessary, they should operate by replacing the JAR file with their own classes, not by using abstract factories to choose from among different classes at runtime. Abstract factories are only appropriate when the *same executing program* needs simultaneous access to more than one implementation. This is rare, and not the case here.

Of course, adopting this principle would mean that the resulting work could not be bundled into the JDK. This would be a good thing. The JDK is already too big, and most people don't need this. This project makes a lot more sense as a non-standard, open source development that can stand or fall on its own strengths and weaknesses, rather than one that is propped up by being bundled with the JDK.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
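[Editorial sketch] For readers unfamiliar with the pattern Elliotte is flagging, here is a minimal self-contained illustration; all names are hypothetical. Callers go through a factory that picks a concrete class at runtime, rather than compiling against one public class directly:

```java
// The extra indirection under discussion: an interface plus a factory
// that selects the implementation at runtime.
interface HttpFetcher { String fetch(String uri); }

class DefaultHttpFetcher implements HttpFetcher {
    public String fetch(String uri) { return "GET " + uri; }
}

class HttpFetcherFactory {
    // In the JDK this lookup is typically driven by a system property
    // or a META-INF/services entry; hard-coded here for the sketch.
    static HttpFetcher newFetcher() { return new DefaultHttpFetcher(); }
}
```

The "replace the JAR" alternative is simply `new DefaultHttpFetcher()` with no factory layer; the factory only earns its keep when one running program needs several implementations at once.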
On 2/15/07, Bill de hOra <bill@...> wrote: > Any chance of getting httplib2 into the standard library Joe? That's always been a stated goal of the project, though I need to submit all the right forms and put a little more polish on the library before I actively start lobbying for that. -joe -- Joe Gregorio http://bitworking.org
On 2/15/07, Jérôme Louvel <contact@...> wrote:
> Hi Mark,
>
> > I agree. Moreover, I also think you generally have different design
> > goals with client and server APIs. For example performance is
> > normally a lot more important for servers than clients. Also, a
> > simple API might be more desirable on the client than on the server
> > (again, because of performance). For these reasons it will likely do
> > more harm than good to try to use server abstractions in a client API
> > or vice-versa.
>
> I understand your point of view, but we also have to take into account
> applications that need high performance for clients too.

Then they can use an

> Implementing transparent RESTful proxies for non-RESTful
> applications is a common use case where you need excellent
> performance and multi-threading on both sides of your application.
>
> FYI, we provide two client HTTP connectors (pluggable and using the
> same API), one based on the JDK's HttpURLConnection and another based
> on the Apache HTTP Client library.
>
> Also, in our case, you can directly set the client's output
> representation reference as the server's output representation,
> without reading/buffering anything. An optimized implementation of the
> Restlet API could even directly move bytes from client socket to
> server socket without consuming JVM memory, thanks to Java's NIO.
>
> Anyway, we didn't find the sharing of common classes between the
> server side and the client side of the API to be an issue; quite the
> opposite. For this design, we leveraged the notion of the REST uniform
> interface (see our org.restlet.Uniform class) and the generic HTTP
> message aspects (entity headers common to both HTTP requests and
> responses). Just think about representation metadata (ETag, media
> type, encoding, etc.): they are strictly the same.
>
> Even the HTTP specification doesn't provide two separate definitions
> for requests and responses depending on whether you see the
> communication from the client end or from the server end of the
> connection, so why define them twice at the API level?

Seriously? I already gave an example of why they might need to be different; because the requirements and/or design goals may be different.

That's not saying there won't be any shared API - I expect you could share a bunch of "util" classes. But I don't think it should be a goal to be able to share important chunks of the API between client and server. If that happens because the goals are similar enough, great, but it shouldn't itself be an objective IMO.

Mark.
--
Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
Coactus; Web-inspired integration strategies http://www.coactus.com
Jérôme Louvel wrote:
> In our case, we designed the Representation class like a Content (or
> like an HTTP entity if you prefer) that can be either written to an
> OutputStream (or to an NIO WritableByteChannel) or read by getting an
> InputStream (or an NIO ReadableByteChannel).

I haven't had as much time to spend getting familiar with RESTlet as I would like, but that's partially a function of its complexity, and that's partially a function of it working on both the client and the server.

Programs that act as both HTTP clients and servers are rare. Usually I'm either writing a client or a server, not both. If RESTlet were a pure server API, it would be simpler and I could learn it and work with it faster. Ditto if it were a pure client API. I'm unlikely to want both at the same time in the same program, but because you've tried to cover both use cases in the same product, it's become quite a bit larger than it needs to be. The learning curve is not linear. Complexity grows faster than the API footprint.

Now are there times when I want both? Yes, as you point out; but it would be easier if I could learn them separately and treat them as separate pieces. Or if I could easily use one but not the other. For instance, I might want to use RESTlet for the server side but Apache HTTPClient for the client.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Marc Hadley wrote: > Hi Steve, > > You raise some interesting points, detailed discussion of which really > belong in the expert group once it is formed. We're still waiting to hear if everyone is welcome in the expert group forum or if it will be a closed discussion. I hope it's the former, but if it's the latter, I expect people will continue kibitzing here. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 2/15/07, Mark Baker <distobj@...> wrote:
> On 2/15/07, Jérôme Louvel <contact@...> wrote:
> > Hi Mark,
> >
> > > I agree. Moreover, I also think you generally have different design
> > > goals with client and server APIs. For example performance is
> > > normally a lot more important for servers than clients. Also, a
> > > simple API might be more desirable on the client than on the server
> > > (again, because of performance). For these reasons it will likely do
> > > more harm than good to try to use server abstractions in a client API
> > > or vice-versa.
> >
> > I understand your point of view, but we also have to take into account
> > applications that need high performance for clients too.
>
> Then they can use an

Oops. Meant to say "They can use a high performance client API then".

Mark.
--
Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
Coactus; Web-inspired integration strategies http://www.coactus.com
Elliotte Harold wrote: > Programs that act as both HTTP clients and servers are rare. > Usually I'm either writing a client or a server, not both. I'm very often writing programs that handle both sides of that equation at the same time. Usually my concerns about the app as a client and as a server are very different, so even then I'm not seeing a lot of value in there being all that much shared between the two.
Marc Hadley wrote:
>> - a programmatic way to configure things. Right now the normal servlet
>> API is purely declarative, except for system-specific APIs. I need the
>> right to add and remove resource mappings dynamically.

I would suggest that a critical use case is being able to manage this from a web-based interface, e.g. like WordPress is configured. That may not be part of the core or the RI, but it should be possible to design a web-based management system for a web site on top of this, so that it is possible to map file systems and restlets (or whatever) to URL structures without having to drop down to code or config files.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Jérôme Louvel wrote: > Even the HTTP specification doesn't provide two separate definitions > for requests and responses depending on whether you see the > communication from the client end or from the server end of the > connection, so why redefine them twice at the API level? The spec defines what it is. The API defines what you can do with it. The different ends of the connection want to do different things with the medium of transfer and consequently view it in different ways. This is hardly unique to HTTP or software by the way. For instance, McDonald's sees a hamburger very differently than I as a customer do. For them it is an inventory item. For me it is lunch. Their operations on the hamburger include reordering and cooking. Mine include eating. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Jérôme Louvel wrote:
> I understand your point of view, but we also have to take into account
> applications that need high performance for clients too.
> Implementing transparent RESTful proxies for non-RESTful
> applications is a common use case where you need excellent
> performance and multi-threading on both sides of your application.

There is not one API or library to rule them all, nor should there be, though standardization often attempts to create such a beast. Different use cases require different libraries and APIs, even when they're doing roughly the same thing, just at different scales.

The JDK needs a solid client-side HTTP API, one that's better than java.net.URL. This library should be optimized and designed for simple use cases, and be adequate up to about the use a web browser or Atom client would put it to. If you're building a network proxy server or a multimillion-feed aggregator such as Bloglines, you need a different HTTP library to suit your needs. I do not think this use case is common enough to justify bundling such a thing with the JDK. Nor do I think we should try to make one library serve both ends of the spectrum. I suspect server-side libraries should be limited to JEE and kept out of JSE in general. Not everything needs to be bundled with the JDK in order to be useful.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte,
> I haven't had as much time to spend getting familiar with RESTlet as
> I would like, but that's partially a function of its complexity, and
> that's partially a function of it working on both the client and the
> server.
In just two lines you can output the content of a Web page to the
console. I don't think it is that complex compared to other APIs I've
seen:
Client client = new Client(Protocol.HTTP);
client.get("http://www.restlet.org").getEntity().write(System.out);
Here is the simplest Restlet server you can build, no XML config
needed, no annotation, no container required, just a couple of JARs in
your classpath:
Restlet restlet = new Restlet() {
    public void handle(Request request, Response response) {
        response.setEntity("Hello World!", MediaType.TEXT_PLAIN);
    }
};

// Create the HTTP server and listen on port 8182
new Server(Protocol.HTTP, 8182, restlet).start();
Of course, we support more complex cases like routing based on URI
templates, virtual hosting, transparent content negotiation.
> Programs that act as both HTTP clients and servers are rare.
> Usually I'm either writing a client or a server, not both.
I have seen other usages but that is not the most important point.
Just the fact that you can reuse your knowledge of the API when
writing a pure client and then later a pure server seems useful, no?
[...]
> Now are there times when I want both? Yes, as you point out; but it
> would be easier if I could learn them separately and treat them as
> separate pieces. Or if I could easily use one but not the other. For
> instance, I might want to use RESTlet for the server side but Apache
> HTTPClient for the client.
Nothing prevents you from doing that if you feel like it. The Restlet
framework never forces you to use the more complex artifacts if you
don't need them. But it seems nice to know they are there when needed.
Best regards,
Jerome
PS: The exact spelling is "Restlet" instead of "RESTlet" :-)
On Thu, Feb 15, 2007 at 12:03:12AM +0000, Bill de hOra wrote: > Nic James Ferrier wrote: > > > I was doing some openid hacking the other day with python's urllib2 - > > supposedly quite a modern HTTP tool. Frankly I'd rather have slammed > > my fingers in a door. It was a deeply unpleasant experience. > > Try this one: > > http://bitworking.org/projects/httplib2/ Does httplib2 provide a way to set timeouts? I didn't see that in the docs. <rant> That's been my biggest complaint with python's standard network client libraries. Everything is in blocking mode with no timeout. And the higher-level libraries like httplib don't provide any way to change that, so you're left with socket.setdefaulttimeout() which is (ugh) global. If you can get a handle to the underlying socket you can always call socket.settimeout(), but httplib makes that very inconvenient: you can't do it using the simple request() method. So instead you have to do the whole connect(), putrequest(), putheader(), endheaders(), send(), getresponse() sequence by hand; and you'd have to look at the source of httplib.py to find out that the socket is at self.sock after connect() has been called, so you then call foo.sock.settimeout(t). Which still doesn't help if the call to sock.connect() hangs, because HTTPConnection.connect() both creates the socket and then calls sock.connect(). As the socket module documentation says: "Note that the connect() operation is subject to the timeout setting, and in general it is recommended to call settimeout() before calling connect()." So you're stuck with needing to call the global socket.setdefaulttimeout(), or (probably better) write a subclass of HTTPConnection that isn't so stupid. The end result is that the easy stuff is useless in the real world, and the poor beginner has to read and understand the source code of httplib.py just to write an app that won't hang. That all adds up to a pretty huge WTF in my opinion. </rant> -- Paul Winkler http://www.slinkp.com
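[Editorial sketch] The workaround buried in Paul's rant — set the timeout on the socket *before* calling connect() — looks like this in isolation. Plain sockets, nothing httplib-specific; the function name is the editor's own:

```python
import socket

def connect_with_timeout(host, port, timeout):
    """Create a TCP socket whose timeout also covers connect().

    This is the ordering the socket module docs recommend: call
    settimeout() first, then connect(), so a hung connect() cannot
    block forever.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)  # applies to connect() as well as recv()
    sock.connect((host, port))
    return sock
```

An HTTPConnection subclass can do exactly this inside its connect() method and assign the result to self.sock, which is where httplib's own connect() stores the socket.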
Paul Winkler <pw_lists@...> writes: <rant> <edited/> > That all adds up to a pretty huge WTF in my opinion. > </rant> I agree... the opportunity for reuse is very low. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
Mark,

> Seriously? I already gave an example of why they might need to be
> different; because the requirements and/or design goals may be
> different.

Sorry, I missed your example. Could you point me to it?

> That's not saying there won't be any shared API - I expect you could
> share a bunch of "util" classes. But I don't think it should be a
> goal to be able to share important chunks of the API between client
> and server. If that happens because the goals are similar enough,
> great, but it shouldn't itself be an objective IMO.

In our case, that was a goal added early in the design process, when we realized we were duplicating too many artifacts. We even use the same API to access the file system (using file:// URIs), the classloader resources, and SMTP and JDBC servers. REST itself talks about proxies to non-HTTP protocols:

http://roy.gbiv.com/pubs/dissertation/rest_arch_style.htm#sec_5_3_1

Of course, that isn't intended to replace dedicated SMTP, JDBC or File APIs, but we've found it to be a useful level of abstraction.

Best regards,
Jerome
At Thu, 15 Feb 2007 11:25:05 -0500, Elliotte Harold wrote:
> The spec defines what it is. The API defines what you can do with it.
> The different ends of the connection want to do different things with
> the medium of transfer and consequently view it in different ways.

This distinction between client & server APIs has hurt the REST style, in my opinion.

An example: most *client* APIs don’t deal well with large request bodies, but they do deal well with large response bodies. With server APIs it is of course the other way round; and most of them insist on parsing your POSTed request body as x-www-form-urlencoded, even when it’s not.

The REST style seems to encourage, as well, intermediary components: caches, gateways, etc. A uniform client/server HTTP API makes these sorts of components much easier to write.

I would rather have seen a low-level API to get at the HTTP spec as it is written, and then a simple API on top to get at the web as it is lived.

best, Erik Hetzner
Erik Hetzner wrote:
> An example: most *client* APIs don’t deal well with large request
> bodies. But they do deal well with large response bodies. With server
> APIs it is of course the other way round; and most of them insist on
> parsing your POSTed request body as x-www-form-urlencoded, even when
> it’s not.

I don't see anything in the concept of either "client API" or "server API" that implies "Please suck at the following depending on whether you are a client or a server".
On 2/15/07, Paul Winkler <pw_lists@...> wrote:
> Does httplib2 provide a way to set timeouts? I didn't see that in the
> docs.

No, it doesn't, since it relies upon httplib for functionality, which has all the problems you enumerated. The only way to set timeouts for httplib2 is to set a global timeout for sockets.

One of the things I will eventually have to do is bring all the functionality of httplib into httplib2 so I can get my hands on the underlying socket, which is required for setting timeouts, using select(), building iterators for responses, etc.

-joe
--
Joe Gregorio http://bitworking.org
Elliotte,

FYI, JAX-WS is part of J2SE6 [1][2][3]. Whether one likes it or not, there is a tiny HTTP server already in JDK6 named com.sun.net.httpserver.HttpServer [4]. One probable motivation for that was support for async scenarios in the WS-* specs.

If I were you, I'd ask hard questions on what (if anything!) this JSR 311 has to do with support for the HTTP Binding in the WSDL 2.0 spec, which is the reason you see JAX-WS mentioned in the description of this JSR.

thanks,
dims

[1] http://www.1060.org/blogxter/entry?publicid=FBF8D553DE3C185F93BB36309619225C&token=
[2] http://blogs.cocoondev.org/dims/archives/004717.html
[3] http://www.theserverside.com/news/thread.tss?thread_id=43499
[4] http://www.google.com/search?hl=en&q=com.sun.net.httpserver.HttpServer&btnG=Google+Search

On 2/14/07, Elliotte Harold <elharo@...> wrote:
> It had to happen sooner or later. The big boys are waking up and
> discovering REST, and naturally they want to protect all us little
> developers from worrying our pretty little heads about nasty things like
> HTTP and XML by creating easy-to-use REST frameworks:
>
> JSR-311 Java API for RESTful Web Services
> http://jcp.org/en/jsr/detail?id=311
>
> Remember, these are the same jokers who gave us servlets and the
> URLConnection class as well as gems like JAX-RPC and JAX-WS. They still
> seem to believe that these are actually good specs, and they are
> proposing to tunnel REST services through JAX-WS (Java API for XML Web
> Services) endpoints.
>
> They also seem to believe that "building RESTful Web services using the
> Java Platform is significantly more complex than building SOAP-based
> services". I don't know that this is false, but if it's true it's only
> because Sun's HTTP APIs were designed by architecture astronauts who
> didn't actually understand HTTP. This proposal does not seem to be
> addressing the need for a decent HTTP API on either the client or server
> side that actually follows RESTful principles instead of fighting
> against them.
>
> To give you an idea of the background we're dealing with here, one of
> the two people who wrote the proposal "represents Sun on the W3C XML
> Protocol and W3C WS-Addressing working groups where he is co-editor of
> the SOAP 1.2 and WS-Addressing 1.0 specifications. Marc was
> co-specification lead for JAX-WS 2.0 (the Java API for Web Services)
> developed at the JCP and has also served as Sun's technical lead and
> alternate board member at the Web Services Interoperability Organization
> (WS-I)."
>
> The other submitter seems to be a primary instigator of the Fast Infoset
> effort to hide XML in binary goop.
>
> This is like asking Karl Rove and Dick Cheney to write the Democratic
> Party platform.
>
> Do we really want to trust these folks to define the official Java spec
> for REST? Please read the JSR, and send comments to jsr-311-comments@...
>
> I hope we can derail this completely, but we probably can't. If not, are
> there any JSR members here who might join the working group and bring
> some sanity and actual REST experience to the development of the
> eventual specification? If we can't stop it, maybe we can at least limit
> the damage.
>
> --
> Elliotte Rusty Harold elharo@...
> Java I/O 2nd Edition Just Published!
> http://www.cafeaulait.org/books/javaio2/
> http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/

--
Davanum Srinivas :: http://wso2.org/ :: Oxygen for Web Services Developers
http://www.soafacts.com/ via http://arcware.net/archive/2007/02/15/SOA-Facts.aspx
At Thu, 15 Feb 2007 19:05:44 +0000, Jon Hanna <jon@...> wrote:
> Erik Hetzner wrote:
> > An example: most *client* APIs don’t deal well with large request
> > bodies. But they do deal well with large response bodies. With server
> > APIs it is of course the other way round; and most of them insist on
> > parsing your POSTed request body as x-www-form-urlencoded, even when
> > it’s not.
>
> I don't see anything in the concept of either "client API" or "server
> API" that implies "Please suck at the following depending on whether
> you are a client or a server".

I was a bit unclear. What I’m suggesting is that client & server APIs have different ways of getting at a message’s body. So server APIs make it easy to return a large body of data in response to a request, and client APIs make it easy to process a large body of data returned as the body of a response, but neither makes it easy to deal with a large body in a request message. Server APIs & implementations make it easy to get at the parameters of a POSTed x-www-form-urlencoded body, but they don’t make it easy to get at a request message body’s bytestream.

Let’s say I’m coding a very simple proxy (if I make an obvious mistake, let me know here; I’ve never coded a proxy). If I get a POST request, I want to grab my request headers, figure out where I’m sending it, make a new request to the upstream server, then pipe the request body’s stream to an upstream request body’s stream and flush it. It has been my experience that this activity is complicated by a lack of uniformity between client & server APIs.

best, Erik Hetzner
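[Editorial sketch] The piping step Erik describes reduces to a chunked copy between two file-like objects. This sketch is API-neutral (the function name is the editor's own); any request and upstream-request streams with read/write/flush would do:

```python
def pipe(src, dst, chunk_size=8192):
    """Copy a request body from src to dst in fixed-size chunks,
    flushing at the end, so a large body is never held in memory."""
    while True:
        data = src.read(chunk_size)
        if not data:          # empty read means end of stream
            break
        dst.write(data)
    dst.flush()
```

The lack of uniformity Erik complains about shows up precisely here: many server APIs expose the inbound body only as pre-parsed parameters, not as a `src` stream this function could consume.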
On Thursday, February 15, 2007, at 08:31PM, "Alan Dean" <alan.dean@...> wrote:
> http://www.soafacts.com/
>
> via
>
> http://arcware.net/archive/2007/02/15/SOA-Facts.aspx

REST can kill both SOA *and* Chuck Norris - in a single nanosecond.

Jan
On Thu, Feb 15, 2007 at 02:17:32PM -0500, Joe Gregorio wrote:
> On 2/15/07, Paul Winkler <pw_lists@...> wrote:
> > Does httplib2 provide a way to set timeouts? I didn't see that in the
> > docs.
>
> No, it doesn't, since it relies upon httplib for functionality, which
> has all the problems you enumerated. The only way to set timeouts
> for httplib2 is to set a global timeout for sockets.
If you're willing to subclass, it's not so bad.
Here's a quick start:
class SlightlyBetterHTTPConnection(httplib.HTTPConnection):
    """HTTPConnection subclass that supports timeouts"""

    def __init__(self, host, port=None, strict=None, timeout=None):
        httplib.HTTPConnection.__init__(self, host, port, strict)
        self.timeout = timeout

    def connect(self):
        """Connect to the host and port specified in __init__."""
        # Mostly verbatim from httplib.py.
        msg = "getaddrinfo returns an empty list"
        for res in socket.getaddrinfo(self.host, self.port, 0,
                                      socket.SOCK_STREAM):
            af, socktype, proto, canonname, sa = res
            try:
                self.sock = socket.socket(af, socktype, proto)
                # Different from httplib: support timeouts.
                if self.timeout is not None:
                    self.sock.settimeout(self.timeout)
                # End of difference from httplib.
                if self.debuglevel > 0:
                    print "connect: (%s, %s)" % (self.host, self.port)
                self.sock.connect(sa)
            except socket.error, msg:
                if self.debuglevel > 0:
                    print 'connect fail:', (self.host, self.port)
                if self.sock:
                    self.sock.close()
                self.sock = None
                continue
            break
        if not self.sock:
            raise socket.error, msg
--
Paul Winkler
http://www.slinkp.com
On 2/15/07, Paul Winkler <pw_lists@...> wrote:
> If you're willing to subclass, it's not so bad.
> Here's a quick start:

Excellent. Thanks,

-joe

> class SlightlyBetterHTTPConnection(httplib.HTTPConnection):
>     """HTTPConnection subclass that supports timeouts"""
>
>     def __init__(self, host, port=None, strict=None, timeout=None):
>         httplib.HTTPConnection.__init__(self, host, port, strict)
>         self.timeout = timeout
>
>     def connect(self):
>         """Connect to the host and port specified in __init__."""
>         # Mostly verbatim from httplib.py.
>         msg = "getaddrinfo returns an empty list"
>         for res in socket.getaddrinfo(self.host, self.port, 0,
>                                       socket.SOCK_STREAM):
>             af, socktype, proto, canonname, sa = res
>             try:
>                 self.sock = socket.socket(af, socktype, proto)
>                 # Different from httplib: support timeouts.
>                 if self.timeout is not None:
>                     self.sock.settimeout(self.timeout)
>                 # End of difference from httplib.
>                 if self.debuglevel > 0:
>                     print "connect: (%s, %s)" % (self.host, self.port)
>                 self.sock.connect(sa)
>             except socket.error, msg:
>                 if self.debuglevel > 0:
>                     print 'connect fail:', (self.host, self.port)
>                 if self.sock:
>                     self.sock.close()
>                 self.sock = None
>                 continue
>             break
>         if not self.sock:
>             raise socket.error, msg
>
> --
> Paul Winkler
> http://www.slinkp.com

--
Joe Gregorio http://bitworking.org
Hi Alan,

On Feb 15, 2007, at 11:29 AM, Alan Dean wrote:
> http://www.soafacts.com/

Thanks! That was precious. My favorite (especially in the context of this group :-) is:

> Ancient lore promises the day when a single unifying technology
> will bring openness and peace to all lands. That technology is not
> SOA.

because SOA killed that technology. :-)

-- Ernie P.
Elliotte,

I've worked with Marc Hadley on WS-Addressing, XML Protocol, URI Templates and WADL. While he's done a lot of work on WS-*, he has also done work on the REST/HTTP side of things. He's very open to learning things, so I don't believe he's trying to "push" a particular agenda. He's smart, dedicated, funny, practical, and exactly the right kind of person to work on this. I like the fact that the WADL work has Java bindings, so it's not just something abstract or "architecture astronaut"-ish as you testily describe it.

While I agree that improvements can be made to servlets, URLConnection, and JAX-WS, I disagree that this should be derailed in any way. I plan on contributing and helping, and I encourage others to do the same.

Cheers,
Dave

_____

From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Elliotte Harold
Sent: Wednesday, February 14, 2007 6:56 AM
To: REST Discuss
Subject: [rest-discuss] Sun proposes to apply Web service standardization principles to REST

It had to happen sooner or later. The big boys are waking up and discovering REST, and naturally they want to protect all us little developers from worrying our pretty little heads about nasty things like HTTP and XML by creating easy-to-use REST frameworks:

JSR-311 Java API for RESTful Web Services
http://jcp.org/en/jsr/detail?id=311

Remember, these are the same jokers who gave us servlets and the URLConnection class as well as gems like JAX-RPC and JAX-WS. They still seem to believe that these are actually good specs, and they are proposing to tunnel REST services through JAX-WS (Java API for XML Web Services) endpoints. They also seem to believe that "building RESTful Web services using the Java Platform is significantly more complex than building SOAP-based services". I don't know that this is false, but if it's true it's only because Sun's HTTP APIs were designed by architecture astronauts who didn't actually understand HTTP.
This proposal does not seem to be addressing the need for a decent HTTP API on either the client or server side that actually follows RESTful principles instead of fighting against them.

To give you an idea of the background we're dealing with here, one of the two people who wrote the proposal "represents Sun on the W3C XML Protocol and W3C WS-Addressing working groups where he is co-editor of the SOAP 1.2 and WS-Addressing 1.0 specifications. Marc was co-specification lead for JAX-WS 2.0 (the Java API for Web Services) developed at the JCP and has also served as Sun's technical lead and alternate board member at the Web Services Interoperability Organization (WS-I)." The other submitter seems to be a primary instigator of the Fast Infoset effort to hide XML in binary goop. This is like asking Karl Rove and Dick Cheney to write the Democratic Party platform. Do we really want to trust these folks to define the official Java spec for REST?

Please read the JSR, and send comments to jsr-311-comments@jcp.org

I hope we can derail this completely, but we probably can't. If not, are there any JSR members here who might join the working group and bring some sanity and actual REST experience to the development of the eventual specification? If we can't stop it, maybe we can at least limit the damage.

--
Elliotte Rusty Harold
elharo@metalab.unc.edu
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Jerome,

There's a huge difference between a protocol specification such as HTTP and an API specification for a protocol. HTTP is the same for both client and server because they both see the same things on the wire. An API could be dramatically different based upon the needs of clients vs. services, and what kind of infrastructure is available for use. I'm not saying JSR 311 should have separate APIs for client/server, just pointing out that there are scenarios where it is desirable.

Cheers,
Dave

_____

From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jérôme Louvel
Sent: Thursday, February 15, 2007 7:48 AM
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: Another potential problem with the 311 proposal

Hi Mark,

> I agree. Moreover, I also think you generally have different design
> goals with client and server APIs. For example performance is
> normally a lot more important for servers than clients. Also, a
> simple API might be more desirable on the client than on the server
> (again, because of performance). For these reasons it will likely do
> more harm than good to try to use server abstractions in a client API
> or vice-versa.

I understand your point of view, but we also have to take into account applications that need high performance for clients too. Implementing transparent RESTful proxies for non-RESTful applications is a common use case where you need excellent performance and multi-threading on both sides of your application. FYI, we provide two client HTTP connectors (pluggable and using the same API), one based on the JDK's HttpURLConnection and another based on the Apache HTTP Client library. Also, in our case, you can directly set the client's output representation reference as the server's output representation, without reading/buffering anything. An optimized implementation of the Restlet API could even directly move bytes from client socket to server socket without consuming JVM memory, thanks to Java's NIO.
Anyway, we didn't find the sharing of common classes between the server side and the client side of the API to be an issue; it's even the opposite. For this design, we leveraged the notion of the REST uniform interface (see our org.restlet.Uniform class) and the generic HTTP message aspects (entity headers common to both HTTP requests and responses). Just think about representation metadata (ETag, media type, encoding, etc.): they are strictly the same. Even the HTTP specification doesn't provide two separate definitions for requests and responses depending on whether you see the communication from the client end or from the server end of the connection, so why redefine them twice at the API level?

Best regards,
Jerome
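Jerome's observation that representation metadata is identical on both ends of the wire could be sketched in a few lines; the class and field names here are invented for illustration, not Restlet's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RepresentationMetadata:
    """Entity metadata common to HTTP requests and responses."""
    media_type: str
    etag: Optional[str] = None
    encoding: Optional[str] = None

# A client describing the entity of a PUT and a server describing the
# entity of a 200 response can use the exact same class:
sent = RepresentationMetadata("application/atom+xml", etag='"v42"')
received = RepresentationMetadata("application/atom+xml", etag='"v42"')
assert sent == received  # same shape at both ends of the connection
```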
Hi Dave,

I think we agree. HTTP clients and servers do receive the same data, but they use it in different ways. The Restlet API reuses the same classes to model the data exchanged, but has separate classes to process it, depending on your role in the HTTP exchange (client or server). See the definition of data in REST: http://roy.gbiv.com/pubs/dissertation/rest_arch_style.htm#sec_5_2_1

My point is that there's no binary separation; life is a bit more complex here. Some artifacts can be shared between a client and a server (the data and metadata), while others have to be designed specifically for clients or for servers. In the Restlet API, we've tried to find a balance between both needs: reuse and specificity.

> I'm not saying JSR 311 should have separate APIs for client/server,
> just pointing out that there are scenarios where it is desirable.

In my opinion, JSR 311 should exclusively target the server-side role. The true challenge is to properly map between object-oriented domain objects (POJOs) and RESTful resources and representations. This is comparable to the mapping between POJOs and RDBMSs (EJB/JPA/Hibernate) or between POJOs and RDF graphs (Sommer).

Best regards,
Jerome
On 2/16/07, Dave Orchard <orchard@...> wrote:
> Jerome,
>
> There's a huge difference between a protocol specification such as HTTP and an API specification for a protocol. HTTP is the same for both client and server because they both see the same things on the wire. An API could be dramatically different based upon the needs of clients vs. services, and what kind of infrastructure is available for use. I'm not saying JSR 311 should have separate APIs for client/server, just pointing out that there are scenarios where it is desirable.
>
> Cheers,
> Dave

At the same time, on the assumption that servers often talk to remote systems, having the ability to chain stuff across the connection is kind of useful. In particular:

- a way to marshal failures without losing useful diagnostics can be handy
- if the client is built around futures, state machines or some alternative to blocking RPC/RMI as a metaphor, then you'd like it somehow to integrate well with the server.

-steve
Just wanted to send you guys a link to PayPal's new "Name Value Pair" API. They've obviously recognized that the complexity of SOAP is not the best, but not that REST is a better alternative. I blogged about it at [1].

OTOH, I'd be curious to know how you guys think it should have been implemented if it were RESTful. Thus far I believe I can mostly recognize RESTfulness, but I still don't know how to design for it.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away attempts to improve the web..."

[1] http://blog.welldesignedurls.org/2007/02/16/paypals-new-name-value-pair-api/
"Mike Schinkel" <mikeschinkel@...> writes:

> Just wanted to send you guys a link to PayPal's new "Name Value Pair" API.
> They've obviously recognized that the complexity of SOAP is not the best,
> but not that REST is a better alternative. I blogged about it at [1].
>
> OTOH, I'd be curious to know how you guys think it should have been
> implemented if it were RESTful. Thus far I believe I can mostly recognize
> RESTfulness, but still don't know how to design for it.

It's similar to the JSON-RPC approach. You make an encoded call and get the data back. It's interesting that people are still using this pattern. It's kinda the starting point for WS-* and XML-RPC and all those.

I don't have much of a problem with this particular pattern, I have to say. I'd prefer a proper RESTful API with hypermedia as the engine of application state, but this is a pragmatic alternative. If PayPal are prepared to scale this app (in my view this is the killer reason for using REST) then it's fine.

--
Nic Ferrier
http://www.tapsellferrier.co.uk   for all your tapsell ferrier needs
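The "encoded call" pattern Nic describes amounts to tunnelling an operation name through a single endpoint as form-encoded name/value pairs; a hypothetical sketch (the parameter names are invented for illustration, not PayPal's actual NVP vocabulary):

```python
from urllib.parse import urlencode

# Hypothetical NVP-style call: the operation itself travels as just
# another name/value pair POSTed to one fixed endpoint.
params = {"METHOD": "GetBalance", "VERSION": "3.2", "USER": "merchant_1"}
query = urlencode(params)
print(query)  # METHOD=GetBalance&VERSION=3.2&USER=merchant_1

# A more RESTful alternative would expose the balance as a resource,
# e.g. GET /merchants/merchant_1/balance, with no METHOD parameter.
```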
Nic James Ferrier wrote:
> If paypal are prepared to scale this app (in my
> view this is the killer reason for using REST) then it's fine.

That comment seems contradictory. Did I misread?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
"Mike Schinkel" <mikeschinkel@...> writes:

> Nic James Ferrier wrote:
>> If paypal are prepared to scale this app (in my
>> view this is the killer reason for using REST) then it's fine.

What I mean is: RESTful apps scale better than anything else. If you're not using REST you have to scale your app through more hardware and bandwidth investment.

--
Nic Ferrier
http://www.tapsellferrier.co.uk   for all your tapsell ferrier needs
Nic James Ferrier wrote:
> > Nic James Ferrier wrote:
> >> If paypal are prepared to scale this app (in my view this is the
> >> killer reason for using REST) then it's fine.
>
> What I mean is:
>
> RESTful apps scale better than anything else. If you're not
> using REST you have to scale your app through more hardware
> and bandwidth investment.

Ah, I see. Still, it's a shame they didn't choose to go with something more RESTful, as I'm sure many developers will see what they did and emulate it.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
For those people who are not subscribed to the Apache jcp-open mailing list, in which Apache participation in/votes on JCP proposals are discussed, here is the Apache response.

At Roy's request there's going to be a name change from "Java API for REST" to something less all-embracing. I don't know what, maybe "Java-API-to-keep-Java-Relevant-after-WS-*-failed".

Given that Sun's RI will be in Glassfish, there's no financial value in keeping the TCK secret, so with any luck it will be open, and hosted on java.net. If it is fully open, then everyone can participate. Justin Erenkrantz is probably going to be the Apache lead, with various hangers-on: Craig Russell (Tomcat), Dims (Axis2), maybe me and Dan Diephouse. I am still trying to decide how much copious free time it merits.

-steve

-------- Original Message --------
Subject: JSR-311 : that "REST" stuff that Roy keeps carrying on about...
Date: Sat, 17 Feb 2007 06:56:46 -0500
From: Geir Magnusson Jr.
To: jcp-open

:)

Last night, Sun informed the EC regarding the changes made to title and package name. I voted yes this morning with the comment:

"The ASF thanks the spec lead for making the changes to the title and package name as noted in a mail to the EC on 2/16/07 - this was a key requirement for us. We'd also like to encourage the spec lead to consider creating the software portion of the TCK as open source software, allowing community participation in designing as well as implementing the tests. There is successful precedent for this, and it wouldn't necessarily put compatibility of independent implementations nor the spec lead's ability to command support or other revenue at risk if this path was chosen.
Finally, we strongly urge the spec lead to "utilize" rather than "leverage" an open source development model for this JSR, to encourage as much participation from the community as possible - the community has a tremendous amount of experience using this architectural model, and it behooves the spec lead to take advantage of known best practices."
On Wed, 2007-02-14 at 22:21 +0000, Nic James Ferrier wrote:
> One would have to come up with a list of attributes of RESTful
> systems:
> - what are the content types?
> - does it support HEAD properly?
> - what happens when you do methods that aren't supported? etc...
> and then apply weights to each of them.
> Then write a client to test a webapp for each of the attributes (we
> might call them constraints /8-) and then produce a report.

I think writing a client to automate anything will be difficult for the most important aspects of an architecture. Here is my quick list:

Ad hoc interoperability, vertical scalability:
* Identifiers should fall into two overlapping classes:
** Identifiers that select a particular piece of information to manipulate, and
** Identifiers that select a place where further data can be added (e.g. POST(a) or POST(p))
* Every resource should implement as many of the architecture's interaction patterns as makes sense, e.g. GET, PUT, POST, DELETE, HEAD.
** Content returned as the same document type by the same URL should not normally differ on other inputs such as cookies or basic authentication, though personalisation can rub this up a little.
* Content types should:
** Be free from verbs (except for code-on-demand), representing pure data
** Encode similar data schemas into the same document type
** Be standard wherever possible
** Include standard documents as sub-documents wherever appropriate
** Extend standard document types through subclassing when variation is required
** Invent new document types when nothing out there is a good fit

Evolvability:
* Components should ignore features of a document they do not understand
* Components should avoid wildcard matches in parsing that might lead them to interpret a feature they do not understand as if it were one they do understand
* Components should not alter their processing depending on a version number found in a document
* If incompatible changes occur to a document type, it should normally be given a new document type identifier
* Components should continue to support legacy features and interfaces until such time as it is known that they are no longer in use

Horizontal scalability:
* When a server does store state due to client requests, that state is either:
** Hidden from view, or
** Made visible as a resource that can be further retrieved and manipulated
* Server-side state storage should be avoided when no money is changing hands to support this storage. This can be said for sessions, pub/sub, and other times when the server stores state either temporarily or permanently. Paying for bandwidth almost involves funny money, but storage can really cost you. It hurts horizontal scalability, increases complexity, and generally makes life hard. That said, most services need to store some state. Just don't do any more than you have to.

Benjamin.
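Nic's automated report could at best cover the mechanically checkable items from a list like Benjamin's; a toy sketch (the resource description format and checks are invented for illustration, and most of the constraints above genuinely need human judgement):

```python
# Toy report generator for a couple of mechanically checkable constraints.
UNIFORM_METHODS = {"GET", "PUT", "POST", "DELETE", "HEAD"}

def report(resource):
    """Return a list of findings for an (invented) resource description."""
    findings = []
    unknown = set(resource["methods"]) - UNIFORM_METHODS
    if unknown:
        findings.append("non-uniform methods: %s" % sorted(unknown))
    if resource.get("varies_on_cookies"):
        findings.append("representation varies on cookies")
    return findings or ["no mechanical violations found"]

assert report({"methods": ["GET", "HEAD"]}) == ["no mechanical violations found"]
assert len(report({"methods": ["GET", "FROB"], "varies_on_cookies": True})) == 2
```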
Hmmm, not sure if I'm oversimplifying, but in a nutshell these protocols are just trying to repeatedly POST/PUT to a resource identified by a unique id. I think you can just rely on PUT for idempotency, and the rest just depends on who you want to generate the id.

If the clients can generate unique ids, then why don't they just repeatedly PUT to http://example.com/<myuniqueid> ?

If you'd rather the server "allocates" ids, then:

>> POST /factory
<< 201 Created
<< Location: /factory/<auniqueid>

Then the client repeatedly PUTs to http://example.com/factory/<auniqueid>

There seems to be some worry about "reclaiming" an unused id after a period of time, but is that really a concern? You don't really "allocate" or "do" anything on the POST. You just make sure you never send back the same id twice, which isn't that hard. You don't even need to keep track of what you've sent. If you're worried about the client making an id up, you could either a) not care -- this is essentially back to the client-generated-id case, or b) sign your ids so you can detect a fraud.

Am I missing something?

Andrew Wahbe
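Andrew's client-generated-id case can be simulated with an in-memory stand-in for the server (a sketch, not real HTTP): because the URI itself fixes the resource's identity, a blind retry of the PUT cannot create a duplicate.

```python
import uuid

store = {}  # stands in for the server's resource space

def put(uri, body):
    """Idempotent PUT: repeating the request leaves the same state."""
    created = uri not in store
    store[uri] = body
    return "201 Created" if created else "200 OK"

uri = "http://example.com/%s" % uuid.uuid4()
first = put(uri, "order: 2 widgets")
retry = put(uri, "order: 2 widgets")   # client never saw the first response
assert (first, retry) == ("201 Created", "200 OK")
assert list(store.values()) == ["order: 2 widgets"]  # no duplicate order
```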
On Thu, 2007-02-15 at 10:19 -0500, Mark Baker wrote:
> On 2/15/07, Elliotte Harold <elharo@...> wrote:
> > The only way to get the ideal client API is to design just a client API.
> > The only way to get the best server-side API is to design just a server-side
> > API. If REST is done right, then we shouldn't need to coordinate
> > the two design efforts. The interfaces between the two should be limited
> > to HTTP.
>
> I agree. Moreover, I also think you generally have different design
> goals with client and server APIs. For example performance is
> normally a lot more important for servers than clients. Also, a
> simple API might be more desirable on the client than on the server
> (again, because of performance). For these reasons it will likely do
> more harm than good to try to use server abstractions in a client API
> or vice-versa.

I am of the opinion that it can be useful to share the same abstraction, though it has to be fairly finely tuned and you may still have some tweaks on one side or the other. I believe I am credited with helping inspire Jérôme's writing of the Restlet framework, and I have developed a similar framework in-house for the company I work for. It started out as mostly an interface class for interacting with a remote server; however, we now also use it as part of the mechanism to accept client requests. I combine it with a simplified routes-like mechanism to locate an appropriate resource object which can then have calls made upon it. Most of our software acts both as client to something and server to something else. This is enterprise-scale REST rather than Web-scale. I admit it isn't perfect, but the opportunities to chain in extra authentication mechanisms and other features have proven useful. It has also been easy to write little proxies to translate from one protocol to another, for example from HTTP to our internal protocols.
In short, I think having a uniform interface in code has been a valuable way to bring the architectural constraints we are looking for closer to the developer. Benjamin
Thanks for this. It's a fantastic list!

Benjamin Carlyle <benjamincarlyle@...> writes:

> * Every resource should implement as many of the architecture's
> interaction patterns as makes sense. eg GET, PUT, POST, DELETE,
> HEAD.

Surely this doesn't need to be tested? An application is not non-RESTful if it doesn't implement a method that we may think it should. That is subjective.

> * Content types should
> ** Be free from verbs (except for code-on-demand), representing pure
> data.
> ** Encode similar data schemas into the same document type
> ** Be standard wherever possible
> ** Include standard documents as sub-documents wherever appropriate
> ** Extend standard document types through subclassing when variation is
> required
> ** Invent new document types when nothing out there is a good fit

All of these seem testable to me.

> Evolvability:
> * Components should ignore features of a document they do not understand
> * Components should avoid wildcard matches in parsing that might lead
> them to interpret a feature they do not understand as if it were one
> they do understand
> * Components should not alter their processing depending on a version
> number found in a document
> * If incompatible changes occur to a document type, it should normally
> be given a new document type identifier
> * Components should continue to support legacy features and interfaces
> until such time as it is known that they are no longer in use

Most of these look like client-side things. It has just occurred to me that automated client testing looks very difficult.

> Horizontal scalability:
> * When a server does store state due to client requests, that state is
> either:
> ** Hidden from view, or
> ** Made visible as a resource that can be further retrieved and
> manipulated
> * Server-side state storage should be avoided when no money is changing
> hands to support this storage. This can be said for sessions, pub/sub,
> and other times when the server stores state either temporarily or
> permanently. Paying for bandwidth almost involves funny money, but
> storage can really cost you. It hurts horizontal scalability, increases
> complexity, and generally makes life hard. That said, most services need
> to store some state. Just don't do any more than you have to.

I think this boils down to a question of not doing cookies or URL rewriting?

--
Nic Ferrier
http://www.tapsellferrier.co.uk   for all your tapsell ferrier needs
On Sun, 2007-02-18 at 20:50 +0000, wahbedahbe wrote:
> Hmmm not sure if I'm over simplifying but in a nutshell these
> protocols
> are just trying to repeatedly POST/PUT to a resource identified by a
> unique id. I think you can just rely on PUT for idempotency and the
> rest
> just depends on who you want to generate the id.
> If the clients can generate unique ids, then why don't they just
> repeatedly PUT to http://example.com/<myuniqueid> ?
I was having this thought just last night, and I can't say I disagree :)
Just use a PUT and let the client pick a globally unique ID. The server
can then either continue using the resource the client identified in the
first place, or redirect the client to the "real" name of the resource
for further interactions. I would lean towards the former.
It's an interesting perspective that essentially eliminates the
POST-at-most-once issue by eliminating POST. Do we really need our
scruffy fourth method?
> If you'd rather the server "allocates" ids then
> >> POST /factory
> << 201 Created
> << Location: /factory/<auniqueid>
> Then then client repeatedly PUTs to
> http://example.com/factory/<auniqueid>
This was one of the proposals I wrote up early in the thread, though it
still has its problems in terms of server resources needing to be
cleaned up.
> There seems to be some worry about "reclaiming" an unused id after a
> period of time, but is that really a concern? You don't really
> "allocate" or "do" anything on the POST. You just make sure you never
> send back the same id twice which isn't that hard. You don't even need
> to keep track of what you've sent. If you're worried about the client
> making an id up, you could either a) not care -- this is essentially
> back to the client-generated-id case, or b) sign your ids so you can
> detect a fraud.
This would still be an issue if you did the redirection thing, or if the
resource was destroyed soon after creation. You would need to keep both
urls around in case the client continued to send PUT requests to their
defined url. It's a horizontal scalability thing. If you need to store
information based on a client request, then you need all of the servers
in your cluster that might receive the next request to know that
information before the next request comes in. It's statelessness between
requests, a fuzzy but important constraint in REST. Ignore it at your
peril, but most of us will need to bend it to varying degrees. The long
and the short of it is: whenever and however far you bend it, make sure
you're getting paid enough to deal with the scalability issues the
bending introduces.
Let me do the concrete proposal thing again.
Problem statement: (same as before)
I have some state that I want to append to a resource. The right method
according to HTTP is POST, but if I don't get a response to my POST I
don't know whether or not to retry.
Client algorithm:
...
guid = generateCryptographicallySafeGloballyUniqueID();
// Perhaps this uri template is retrieved from a GET to what would have
// been our factory resource, or from a broader form that the user
// filled-out.
request.populateURITemplate("http://airline.example.com/ticketsales/{user}/{guid}",user,guid);
startOrResetTimer(reasonable resource state retention period, eg 2min);
try
{
retryPUT:
factory.PUT(request);
}
catch (NoResponse) // aka GatewayTimeout
{
// One of two possibilities exist. Either,
// * our PUT didn't arrive, or
// * our created resource has been destroyed already
// We try to ensure that the latter doesn't
// happen by giving up after a reasonable
// period, though we may have confidence that it won't
// be destroyed quickly and just keep retrying
goto retryPUT;
}
catch (RetentionPeriodTimeout)
{
// It is still possible that we could successfully
// send the request at this point... but it could have
// been created and destroyed again given the time we
// have taken so far. We had better give up.
}
catch (...)
{
// Normal error handling
}
If we get a 201 we know that this is the first successful request. If we
get a 200 we know that a previous request had already succeeded, but we
have successfully changed state. A 410 Gone might indicate that the
resource was created and then subsequently destroyed. Some other return
codes might mean successful delivery, eg Not Modified might play some
part. However we should probably keep things simple.
I would suggest keeping the URI ranges user-specific for security and
cross-pollination reasons. We don't want one user going and blatting
over the uri-space and preventing other valid requests from getting
through. Client GUIDs should probably be cryptographically safe to avoid
giving away secret information.
Server responsibilities:
* Don't destroy the resource too quickly, or the client won't know for
sure whether it was created in the first place. Consider leaving a 410
in place for some time if the resource is destroyed quickly.
Benjamin
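Benjamin's server responsibility at the end (don't destroy the resource too quickly, and consider leaving a 410 in place for a while) can be sketched with an in-memory tombstone table. The names, the retention period, and the simulated clock are all illustrative:

```python
import time

resources = {}
tombstones = {}           # uri -> time the resource was destroyed
RETENTION = 120.0         # seconds to keep answering 410 Gone

def handle_put(uri, body, now=None):
    now = time.time() if now is None else now
    if uri in tombstones and now - tombstones[uri] < RETENTION:
        return "410 Gone"   # "created then destroyed", not "never existed"
    created = uri not in resources
    resources[uri] = body
    return "201 Created" if created else "200 OK"

def handle_delete(uri, now=None):
    now = time.time() if now is None else now
    resources.pop(uri, None)
    tombstones[uri] = now   # leave a tombstone so retried PUTs see 410
    return "204 No Content"

assert handle_put("/tickets/1", "2 seats", now=0.0) == "201 Created"
assert handle_delete("/tickets/1", now=1.0) == "204 No Content"
assert handle_put("/tickets/1", "2 seats", now=2.0) == "410 Gone"
assert handle_put("/tickets/1", "2 seats", now=200.0) == "201 Created"
```

The client retrying its PUT inside the retention window gets an unambiguous 410 rather than silently re-creating state it believed was gone.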
G'day,

I have taken a second stab at a RESTwiki article called "REST In Plain English" [1]. I suspect it really needs at least a third significant stab before it is "ready", but I would like to start soliciting feedback on the current draft. It attempts to be REST in practice rather than REST from first principles. It doesn't actually go and list out REST constraints. Instead, it tries to explain the differences between an unconstrained architecture and a RESTful architecture from a participant's or manager's perspective. A quick extract:

REST is a set of rules that an architecture should conform to... It isn't surprising that an unconstrained architecture has its problems... REST eliminates ad hoc messages and radically shifts the focus of API development towards defining pieces of information that can be retrieved and manipulated... The uniform interface is meant to evolve over time.

I'm interested in feedback on what is really missing from the article as well as commentary on the content. I want it to remain precise and clear but high-level. My focus is on architecture as a vehicle for ensuring components can talk to each other and that various other architectural properties such as scalability and evolvability are met.

Benjamin.

[1] http://rest.blueoxen.net/cgi-bin/wiki.pl?RestInPlainEnglish
Benjamin Carlyle wrote:
>
>
> On Sun, 2007-02-18 at 20:50 +0000, wahbedahbe wrote:
> > Hmmm not sure if I'm over simplifying but in a nutshell these
> > protocols
> > are just trying to repeatedly POST/PUT to a resource identified by a
> > unique id. I think you can just rely on PUT for idempotency and the
> > rest
> > just depends on who you want to generate the id.
> > If the clients can generate unique ids, then why don't they just
> > repeatedly PUT to http://example.com/<myuniqueid> ?
>
> I was having this thought just last night, and I can't say I disagree :)
>
> Just use a PUT and let the client pick a globally unique ID.
that's how email works (client side generation). When this issue came up
on atompub (for the atom:id element on creating a new member entry), it
more or less split the working group.
Ultimately if you can't trust clients, you can't trust the IDs they
generate (for example, duplicating or gaming atom:id is an obvious spam
vector).
There's another point. If you are asking clients to generate uids to
stuff into URLs, then you are breaking with the idea that URLs are
opaque to clients. This might or might not be a problem, but it's such
an important principle it has to be asked. For example, without machine
readable deployments of URI templates, how do I know to compose the URL
{http://example.com/}{myuniqueid}
in a way that doesn't bake in the first part?
> The server
> can then either continue using the resource the client identified in the
> first place, or redirect the client to the "real" name of the resource
> for further interactions. I would lean towards the former.
If you are having to issue redirects, I think you might as well let the
server issue the id to begin with.
> It's an interesting perspective that essentially eliminates the POST
> at most once issue by eliminating POST. Do we really need our scruffy
> fourth method?
It's not really about method choice imo; it's about administrative
control of the resource namespace. Servers are best placed to manage
their own URIs.
cheers
Bill
Hi. I am new to REST.

A while ago I saw David Heinemeier Hansson's brilliant keynote at RailsConf in summer 2006 [1]. He described the new Rails 1.2 features, including REST as a new way of programming with Rails. After watching that video I dove deep into stuff about REST: videos, blog entries and articles about this fascinating topic. That way I found this wonderful group a few weeks ago. It's a pleasure to read your posts!

Now I've got stuck. Perhaps the following question is something somebody asked before in this group, but I'm afraid I couldn't find either the question or the answer within the ~8000 posts of this group.

David said in the video you should use your own representations for aggregations, states and events. He constructed an example with persons and clubs. He asked: should persons know their clubs, or should clubs know their members? He answered that both solutions are not right, because it doesn't feel good to ask a club for persons or a person for clubs. Clubs shouldn't know anything about persons and persons shouldn't know anything about clubs. He introduced a new class called Membership, and objects from that class encapsulate the knowledge of one person and one club. He designed this little REST system with three representations: person, club and membership.

That looked pretty cool to me! A membership is something explicit, not something implicit like having a list of persons in a club or a list of clubs in a person. What would happen if you had to design invalid memberships, or a description of a membership, or special memberships like V.I.P.s? That's ugly without having the class Membership, isn't it?

Okay, my world was enlightened. Everything was fine. I walked six inches over the ground for the next few days or so. But then I asked myself (and asked, ahem, harassed a lot of friends and colleagues) if this could be the answer to every aggregation (and state and event).
David said it is, but he didn't say why, or I haven't heard or understood what he said.

So this is the question: would you design, in a RESTful way, every aggregation (and state and event) with a class, respectively with a representation?

Some minor questions: If yes: what about a shopping cart and the products in it? What would that aggregating representation be named? If no: what did David mean in his keynote, or what am I missing here?

Thank you very much to everyone who can show me the right way. Links to previous posts or articles covering this issue are also very welcome.

Bernd

[1] http://www.scribemedia.org/2006/07/09/dhh
On 2/18/07, Benjamin Carlyle <benjamincarlyle@...> wrote:
> This was one of the proposals I wrote up early in the thread, though it
> still has its problems in terms of server resources needing to be
> cleaned up.
That's actually a non-problem if you design your PUT URIs well.
For example, the PUT URI could contain a time stamp (presuming you
can make them all unique; if you can't, add in a counter).
The idea is that there is nothing to store. If a PUT is attempted at
a URI with a time stamp that is too old then just reject it. No need to
store anything since the time stamp is in the URI.
Don't trust your clients? Add a hash that verifies the time stamp+counter:
http://example.org/temporary/{timestamp}+{counter}+{hash}/
Once a PUT is successful you can redirect to the right 'final' URI.
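A minimal sketch of Joe's stateless scheme (the secret key, URI layout, and five-minute freshness window below are all assumptions for illustration, not anything specified above):

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"   # hypothetical server-held key
MAX_AGE = 300                    # assumed freshness window, in seconds

def mint_put_uri(counter, now=None):
    """Mint a temporary PUT URI carrying {timestamp}+{counter}+{hash}."""
    ts = int(now if now is not None else time.time())
    mac = hmac.new(SECRET, f"{ts}+{counter}".encode(), hashlib.sha256).hexdigest()
    return f"http://example.org/temporary/{ts}+{counter}+{mac}/"

def accept_put(uri, now=None):
    """Server check on an incoming PUT: verify the hash, reject stale
    timestamps. Nothing is stored server-side between mint and check."""
    ts, counter, mac = uri.rstrip("/").rsplit("/", 1)[-1].split("+")
    expected = hmac.new(SECRET, f"{ts}+{counter}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False   # client tampered with timestamp or counter
    now = now if now is not None else time.time()
    return now - int(ts) <= MAX_AGE
```

Because the hash covers the timestamp and counter, an untrusted client can neither forge a fresh URI nor extend an expired one; the server's only state is its secret.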
-joe
--
Joe Gregorio http://bitworking.org
Thanks everyone again for your invaluable feedback on 'REST for the Rest of Us' [1]. I've finally integrated most of it. While doing so, I realized that a REST Lexicon [2] for quick reference would be really useful as well. Is there such a thing I can borrow definitions from? Attribution will be done by links. Any pointers/suggestions appreciated. Cheers, - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org [1] http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us [2] http://doc.opengarden.org/Articles/REST_for_the_Rest_of_Us/REST_Lexicon
"Bernd Schiffer" <schifferbernd@...> writes:
> Hi.
>
> I am new to REST.
Welcome!
> So this is the question: would you design, in a RESTful way, every
> aggregation (and state and event) with a class, respectively with a
> representation?
I think you've been saying 'aggregation' for 'aggregation by
relationship' (aggregation does not necessarily have to be by
relationship).
But not all relationships ('aggregation' in your question) are
supposed to be exposed to the outside. Rather, you should be exposing
only those relationships that are worth considering as resources in
your particular context.
For example, it is OK to have a MEMBERSHIP resource which allows
a consumer to manipulate the MEMBERSHIP relationship. However, it may
or may not be appropriate to expose an ADDRESS resource which allows a
consumer to manipulate a user's address. Instead, you could manipulate
the address through the USER resource.
REST is resource-centric. What a resource is depends on the
context at hand. There is no always nor never.
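As an illustration of treating the relationship itself as a resource, here is a toy sketch of DHH's person/club/membership example. The URIs, field names, and in-memory store are all invented for the sketch:

```python
# Toy sketch: the person/club relationship as a first-class MEMBERSHIP
# resource. URIs, field names and the in-memory store are hypothetical.

memberships = {}   # membership URI -> representation
_counter = 0

def create_membership(person_uri, club_uri):
    """POST-to-collection equivalent: the relationship gets its own URI."""
    global _counter
    _counter += 1
    uri = f"/memberships/{_counter}"
    memberships[uri] = {"person": person_uri, "club": club_uri,
                        "status": "regular"}  # room for "vip", "invalid", ...
    return uri

def get_membership(uri):
    """GET equivalent: dereference the relationship directly."""
    return memberships.get(uri)

def delete_membership(uri):
    """DELETE equivalent: ending the relationship touches neither the
    person resource nor the club resource."""
    memberships.pop(uri, None)
```

Because the relationship is its own resource, qualifying it (V.I.P., invalid, dated) is just more state on that resource rather than a redesign of person or club.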
> Some minor questions:
> If yes: what's with a shopping cart and products in it. What would
> that aggregating representation be named?
The shopping cart.
Bear in mind that the shopping cart is already a name for the
relationship between goods the store has and goods you want to buy.
Hope that helps.
YS.
I've once again been impossibly busy, so everyone that was watching this
thread has no doubt forgotten what it was all about...
I was trying to explain my REST pattern:
>>So - I promote a /symmetric/ REST point of view, with
>>active resources being dependent on each other and conveying
>>state between themselves with either GET or POST depending
>>on which party initiates the transfer.
>>
>>I do hope and believe this pattern is still REST-compatible.
>>Please read part 3 of my series
>>(http://duncan-cragg.org/blog/post/business-functions-rest-dialogues/)
>>for more explanation of this pattern.
>>
I'm hoping to see if 'Symmetric REST' is a clean subset of REST - i.e.,
adding more constraints, not fewer or different ones.
Symmetric REST has definitive answers to the following issues that recur
so often on this list:
-: The meaning (nearly said 'semantics'!) of POST
-: Client state, cookies, and user identity
-: Using PUT/POST/DELETE vs just using POST
-: Queries
-: Opaque/Transparent URIs
The essence of my pattern is that, instead of some opaque client that's
outside of the world of resources, we see each HTTP GET or POST as being
performed on behalf of a peer resource, and see them both as simply
forms of state transfer between resources - one pull, the other push.
A specific case of this is shown in a form POST, the data of which has a
content-type, but no URI! So we can't subsequently save a reference to
the POSTed data and GET it again later. The POST is the one and only
chance we have to see the data.
Another case in point is client state stored in Cookies. It feels like
it should be a URI identifying the user ('s browser/machine).
When a server receives a POST, it cares who sent it and whether they are
authorised to try and affect things on the server. Each submitter may
have different motives and variants on things being POSTed, and the
server has to reconcile them itself - it can't necessarily just do what
the clients want. We have client sources wanting something, and server
resources wanting perhaps something else.
So a POST is as much about the client as it is about the target resource
- perhaps more so. In a Resource-Oriented Architecture, we really
should be dignifying POST data with a URI and talking about its valid
content-types in the same way as we do for server resources we would GET.
A form submission, complete with cookie, is an announcement of the
'state of the user' - the cookie /is/ like a URI and the POST body is a
notification of their 'state' (of mind at the time).
A client is now just another 'server' with /resources/ that themselves
perform GETs and POSTs and which can, symmetrically, be targets of GETs
and POSTs. We may need Comet-like patterns to get this to work in a browser.
So - a resource GETs to pull the state of a peer resource, and POSTs to
push its /own/ state to a peer. Better still, a resource can push its
URI (perhaps in a Content-Location: header in a request message), and
let the target GET the content when it's ready. Adding Content-Location:
to a request header is perhaps a little non-HTTP, but shouldn't be non-REST.
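A toy sketch of that push-the-URI-then-pull pattern, with the HTTP legs stubbed out as plain function calls (all names hypothetical):

```python
# Sketch: a resource POSTs only its URI (cf. a Content-Location header)
# and the target pulls the state with GET when it is ready. The dict
# below stands in for the peer's server side.

resources = {"/sensor/1": {"temp": 20}}

def http_get(uri):
    """Stand-in for an HTTP GET against a peer resource."""
    return resources[uri]

class Target:
    def __init__(self):
        self.pending = []   # URIs announced but not yet fetched
        self.state = {}

    def receive_post(self, content_location):
        """The POST carries only the sender's URI, not its state."""
        self.pending.append(content_location)

    def refresh(self):
        """Later, on its own schedule, the target pulls the state."""
        while self.pending:
            uri = self.pending.pop()
            self.state[uri] = http_get(uri)

t = Target()
t.receive_post("/sensor/1")
resources["/sensor/1"] = {"temp": 21}   # state changed before the pull
t.refresh()
```

One consequence the sketch shows: because the target pulls lazily, it always sees the peer's latest state, not a possibly stale snapshot pushed earlier.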
Now, all resources are active entities responsible for their own destiny
and interact using GET/POST, or 'using REST'.
Benjamin Carlyle had some excellent things to say, as always, so I'd
like to pick up from there:
> I haven't read your content in detail as yet, but you also seem to be
> including a pub/sub mechanism in your model. Again without knowing how
> much of this you have covered exactly, subscription also has its
> complications :)
>
Yes - the next obvious step is to implement pub/sub: we can keep track
of those peer resources that are interested in us, and POST to them our
state whenever it changes. Alternatively, we can use one of the many
open- or long-GET patterns to receive updates. Better still, POST or
long-GET the /fact/ of a change, then use normal GET to fetch the new
state, allowing normal caches to fill up and allowing lazy caching.
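The notify-then-fetch variant can be sketched in the same toy style (names invented; a dict stands in for an ordinary HTTP cache):

```python
# Sketch of "POST the fact of a change, then GET the new state":
# subscribers learn only *that* something changed, so the state itself
# flows through normal GET and can be cached.

state = {"/stock/ACME": 100}
cache = {}            # stands in for an ordinary HTTP cache
subscribers = []      # callbacks playing the role of peer resources

def http_get(uri):
    """GET through the cache; only the first miss hits the origin."""
    if uri not in cache:
        cache[uri] = state[uri]
    return cache[uri]

def subscribe(callback):
    subscribers.append(callback)

def update(uri, value):
    """Change the resource, invalidate the cache, POST just the fact."""
    state[uri] = value
    cache.pop(uri, None)
    for notify in subscribers:
        notify(uri)

seen = []
subscribe(lambda uri: seen.append(http_get(uri)))
subscribe(lambda uri: seen.append(http_get(uri)))   # second GET hits the cache
update("/stock/ACME", 101)
```

The point of the pattern shows up in the second subscriber: its GET is served from the cache, so N subscribers cost one origin fetch, not N.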
What, um, 'complications' did you have in mind? Time out of
subscriptions, scalability of vast numbers of subscriptions, that kind
of thing?
>>-: Alternatively - what seems to be the subject of this thread -
>>it may have *real-world dependency*: maybe it can't just switch
>>to 'running' until the real world thing it models actually /is/
>>running! So, when it receives a direct transformation intent,
>>it goes off and satisfies that constraint by ensuring it's
>>ticking over in reality, and only then changes its visible
>>state to 'running'.
>
> I'm a SCADA guy, so this is a kind of resource that comes frequently to
> mind for me. This kind of resource can have knock-on effects also. If I
> start a fan in a chiller plant for a building I am likely to see changes
> to the resources demarcating temperature guage state. These changes slip
> between resources via the implementation of these resources,
> specifically the monitoring of changes to real world conditions.
>
I was a SCADA guy. Nearly 30 years ago, but it's still in me blood.. =0)
The 'feeling' of REST and declarative approaches generally is the same
as that I used to get when designing logic circuits, and especially
controllers, in my teenage years (it's OK, I discovered girls eventually)!
>>-: Finally, the resource may be *smart*, and decide to switch to
>>'running' because of the rule that, as long as Joe's resource is
>>running, it should be running itself. So it spots Joe's resource
>>running, and starts running without even being told to! That's what I
>>was talking about in part 3 of my dialogues.
>
> I suspect this is also the kind of resource that models most business
> functions... though I would like to cut to the specifics. I see a set of
> resources as an API to a service that expose its functionality in an
> architecturally-consistent way. Importantly, they are not services in
> their own right. They share state with each other, but this is not the
> same as communicating with each other by RESTful means. They are
> implemented with objects or with embedded database procedures. These
> implementation-level entities talk to each other. That interaction is
> what affects the service's resources.
>
> So you have a service that is managing which other
> services/devices/functions are running in its system. It observes a
> change in one, and starts the other. The actual observation could be an
> object notifying others via an observer pattern, .. or pub/sub
> notification mechanism or by GET polling.
>
> The systems I work with tend to have a lot of pub/sub relationships to
> trigger knock-on behaviours between services. This is necessary because
> changes to the real world are unpredictable to even the most aware
> components in the architecture. Within a service we would typically be
> talking about the observer pattern. ..
>
Now - this is where you get to the heart of the issue (excellent
response! thanks): I /am/ suggesting that our in-process programs should
be event-driven, and use the observer pattern as the only interaction
mode between domain objects. Objects will have open state and no methods
- apart from GET and POST. So, OK, they're not really objects in the
traditional sense. This is where we switch from imperative to
declarative models. We can program in declarative rules, triggered by
events over state. Then distributing such a program is simply 'draw a
dividing line through and drop HTTP - used symmetric-RESTfully - in the
middle'! Needless to say, SEDA would be a good fit to this, server-side..
Quickly on the remaining points (this has turned out to be too long
again) - URIs: opaque - if you want content and content syntax, it goes
in the body - end of discussion =0) So Queries: POST-redirect pattern.
PUT/DELETE: if you want imperative stuff like that, stick it in a
special 'Builder' content type and POST it instead - you have much freer
rein to do all sorts of other hypermedia manipulations anyway like
that, plus it prevents people seeing REST as = CRUD and expecting a
database... [sorry to any recent Rails arrivals who are bemused by all
this!]
OK - I've gone on enough. Any comments (even "wha'???") are welcome.
In fact, especially "wha'???" - it helps me hone my presentation! =0)
Cheers!
Duncan
PS - Joe: did you get my email? Anyway, thanks for the good stuff at
http://wellformedweb.org/story/1. What do you /now/ think about the Well
Formed Web?
_________________________________
Duncan Cragg
http://duncan-cragg.org/blog/
I'm getting a gut feel that Atom (both the format and the publishing protocol) is going to be very useful and used for many other things other than blog publishing. It has some very interesting characteristics that make it a good fit for various integration scenarios, amongst other things. Philosophically, it can almost be perceived as a simpler, easier, REST- oriented alternative to SOAP. We may finally have something to counter the dirtiness that is SOAP, since Atom can be a very good enveloping mechanism. Most of the current hype about Atom is about blogs. Which might not be a bad thing, since it will keep other uses under the radar and in stealth mode (from the evil Corporate IT types who don't know any better ;-) ), till we have some "but it just works" deployments to show off. I'm curious if anyone else is using/has used/ is considering ATOM and/or APP for other purposes, especially in the integration space? Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
"REST replacement for SOAP"... ha ... true, and sad, because for all its ambitions APP is just another envelope for documents. It's not much of an improvement over doc style SOAP. Just being RESTful isn't enough to encourage widespread adoption. Yay: there's no verb in the message. Yay, they obey HTTP method semantics. Apart from that they both wrap a document in an envelope. As long as servers and clients have to agree ahead of time on complete document semantics they're forcing one another to lock in code at development time. But the aspect of the web that promotes independent evolution is the dialog that goes on between server and agent -- in particular forms. Servers tell agents how to compose messages; they don't agree on message formats ahead of time. I don't think APP is a failure. I'm just saying you won't see significantly more "programmable web" using APP than you would have with SOAP. Don't you start me talking! Hugh On 2/20/07, Andrzej Jan Taramina <andrzej@...> wrote: > I'm getting a gut feel that Atom (both the format and the publishing > protocol) is going to be very useful and used for > many other things other than blog publishing. It has some very > interesting characteristics that make it a good fit for various > integration scenarios, amongst other things. > > Philosophically, it can almost be perceived as a simpler, easier, REST- > oriented alternative to SOAP. We may finally have something to counter the > dirtiness that is SOAP, since Atom can be a very good enveloping mechanism. > > Most of the current hype about Atom is about blogs. Which might not be a bad > > thing, since it will keep other uses under the radar and in stealth mode > (from the evil Corporate IT types who don't know any better ;-) ), till we > have some "but it just works" deployments to show off. > > I'm curious if anyone else is using/has used/ is considering ATOM and/or APP > for other purposes, especially in the integration space? 
> Andrzej Jan Taramina
> Chaeron Corporation: Enterprise System Solutions
> http://www.chaeron.com
>
> Yahoo! Groups Links

--
Hugh Winkler
Wellstorm Development
http://www.wellstorm.com/
+1 512 694 4795 mobile (preferred)
+1 512 264 3998 office
On Tue, Feb 20, 2007 at 10:50:10AM -0500, Andrzej Jan Taramina wrote: > I'm curious if anyone else is using/has used/ is considering ATOM > and/or APP for other purposes, especially in the integration space? http://2006.xmlconference.org/programme/presentations/202.html I wrote to the author to inquire if there were more detailed notes or an audio or video recording available... unfortunately, there isn't. -- Paul Winkler http://www.slinkp.com
Definitely. Just check out Yahoo Pipes or any of the Google (near)APP services.

On 2/20/07, Andrzej Jan Taramina <andrzej@...> wrote:
> I'm getting a gut feel that Atom (both the format and the publishing
> protocol) is going to be very useful and used for
> many other things other than blog publishing. It has some very
> interesting characteristics that make it a good fit for various
> integration scenarios, amongst other things.
>
> Philosophically, it can almost be perceived as a simpler, easier, REST-
> oriented alternative to SOAP. We may finally have something to counter the
> dirtiness that is SOAP, since Atom can be a very good enveloping mechanism.
>
> Most of the current hype about Atom is about blogs. Which might not be a bad
> thing, since it will keep other uses under the radar and in stealth mode
> (from the evil Corporate IT types who don't know any better ;-) ), till we
> have some "but it just works" deployments to show off.
>
> I'm curious if anyone else is using/has used/ is considering ATOM and/or APP
> for other purposes, especially in the integration space?
>
> Andrzej Jan Taramina
> Chaeron Corporation: Enterprise System Solutions
> http://www.chaeron.com

--
Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
Coactus; Web-inspired integration strategies http://www.coactus.com
On Feb 20, 2007, at 5:11 PM, Hugh Winkler wrote: > I don't think APP is a failure. I'm just saying you won't see > significantly more "programmable web" using APP than you would have > with SOAP. I beg to differ -- REST + APP just means some more constraints, which IMO is a good thing for increasing understanding, which furthers the "programmable Web". Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Tuesday, February 20, 2007, at 04:52PM, "Andrzej Jan Taramina" <andrzej@...> wrote:
>I'm curious if anyone else is using/has used/ is considering ATOM and/or APP
>for other purposes, especially in the integration space?

I spent most of my non-working hours and even some of the working hours during the last year on this topic. IMHO, Atom/APP can be for the Enterprise Space (large scale, machine-2-machine oriented systems) what HTML is for the human oriented Web. With HTTP alone, the design space is too unconstrained for an average developer/designer to get real work done, but with the introduction of Atom/APP (and a core set of extensions) design becomes manageable for them.

Yes, definitely - there seems to be a bright future for that marriage.

Jan
On 2/20/07, Stefan Tilkov <stefan.tilkov@...> wrote:
> On Feb 20, 2007, at 5:11 PM, Hugh Winkler wrote:
> > I don't think APP is a failure. I'm just saying you won't see
> > significantly more "programmable web" using APP than you would have
> > with SOAP.
>
> I beg to differ -- REST + APP just means some more constraints, which
> IMO is a good thing for increasing understanding, which furthers the
> "programmable Web".

I guess we could find some extra constraints in APP. Maybe the absence of a verb in the message definition is the major one. Good on them.

I cannot figure out why APP needs an envelope at all. It makes sense for web sites exposing feeds with > 1 entry to use a format that rolls up each item's metainfo with the item itself -- call that a list of entries. That's not APP; it's just serving up feeds. But when I'm POSTing or PUTting content, I could just POST/PUT the html/xhtml/text/whatever representation directly to the content URI. Entity headers could capture anything they currently model in an <entry>. APP could simply be a list of entity header definitions.

But they've forced implementors to model something called an <entry> which just wraps the real content. Then there's this indirection to get to the real content. In that respect, APP did not enhance my understanding.

Hugh
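To make Hugh's point concrete, here is a hypothetical flattening of entry metadata into entity headers. The mapping below is invented for the sketch -- it is one possible reading of "APP could simply be a list of entity header definitions", not anything APP or HTTP actually specifies:

```python
# Hypothetical sketch: instead of wrapping content in an <entry>, PUT the
# content directly and carry the entry's metadata as entity headers.
# The header choices here are loose stand-ins, not a real APP mapping.

entry_metadata = {
    "title": "Envelope Seen",
    "updated": "2007-01-20T12:15:09Z",
    "id": "http://example.org/entries/42",
}

def as_entity_headers(meta):
    """Flatten <entry> metadata into header form for a bare PUT."""
    mapping = {"title": "Slug",              # assumed stand-in headers
               "updated": "Last-Modified",
               "id": "Content-Location"}
    return {mapping[k]: v for k, v in meta.items()}
```

With such a mapping, the representation itself travels as the entity body and no wrapper element or second dereference is needed to reach "the real content".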
Funny, I'm just now in the process of formulating a simple demonstration of how to manage bibliographic metadata with APP. Has anyone else documented their use of APP for non-blogging applications anywhere? I have run across gdata and pipes--what are the other prominent implementations? //Ed
Hugh:
> "REST replacement for SOAP"... ha ... true, and sad, because for all
> its ambitions APP is just another envelope for documents. It's not
> much of an improvement over doc style SOAP. Just being RESTful isn't
> enough to encourage widespread adoption. Yay: there's no verb in the
> message. Yay, they obey HTTP method semantics. Apart from that they
> both wrap a document in an envelope.

I'm not sure I agree with your carte blanche reasoning that an envelope is a bad idea. Having a simple enveloping structure can help divorce transmission semantics from document semantics in a nice way. SOAP is just too complicated, too rigid and too tied to other crap like XSDs, WSDLs and the like, and fosters RPC thinking more than not. The Atom format looks like a much nicer, lightweight framework for information publishing and consuming. And the APP provides behavioural rules on how to transport that format over HTTP, basically providing an off-the-shelf, standardized REST mechanism, which is not a bad thing.

> As long as servers and clients have to agree ahead of time on complete
> document semantics they're forcing one another to lock in code at
> development time.

I don't think we're going to see a resolution of that semantic agreement in my lifetime. At least not one that is understandable and implementable by mere mortals (most semantic web initiatives are not either). And I see nothing wrong with solving business problems by designing and formalizing document semantics between involved parties in the meantime. Better than not solving the problems. A good example is the Integrated Health Record holy grail that most western countries are desperate to move towards. You're not going to resolve the semantic issues in that arena using automated approaches any time soon.

> But the aspect of the web that promotes independent
> evolution is the dialog that goes on between server and agent -- in
> particular forms. Servers tell agents how to compose messages; they
> don't agree on message formats ahead of time.

You know, my inquiry was to do with integration scenarios, machine to machine, between existing systems for the most part. Don't much care about the web as you define it above, in the context of what I'm working on of course, except as a global transmission and connectivity mesh.

> I don't think APP is a failure. I'm just saying you won't see
> significantly more "programmable web" using APP than you would have
> with SOAP.

Like I said, I'm interested in more classic integration scenarios and not the concept of a "programmable web", and that is the context within which I want to discuss Atom and APP applicability.

> Don't you start me talking!

Why not? That was the intent of my post. To initiate a discussion of the merits and possibilities of using Atom and APP for things other than blogs. ;-)

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com
Jan: > I spent most of my non-working hours and even some of the working hours > during the last year on this topic. I knew I wasn't alone in this thinking! > IMHO, Atom/APP can be for the Enterprise Space (large scale, machine-2-machine > oriented systems) what HTML is for the human oriented Web. With HTTP alone, > the design space is too unconstrained for an average developer/designer to get > real work done but with the introduction of Atom/APP (and a core set of > extensions) design becomes manageable for them. Are there any references, documents, discussion, examples of such usage and/or extensions that you can point me at? > Yes, definitely - there seems to be a bright future for that marriage. Super! Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
> Definitely. Just check out Yahoo Pipes or any of the Google (near)APP > services. I've been meaning to check out Pipes for a while....but what I'm thinking of is more industry-specific integration scenarios. Retail, Manufacturing, Healthcare, Government, Aerospace and other uses for integration purposes. Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
Hugh: > I think we will. The technique we end up using will come from some > effort like microformats. It probably won't have a ton to do with OWL. > Some people will just tag elements in forms and declare what > "author-name" means, and off we go. Sure, for specific problem areas or needs. ATOM/RSS did just that for blog publishing and aggregation for the most part, at least at some conceptual level. My comment was more about overall semantics, that are not tied to a particular application or solution segment. Where any arbitrary server and client could just decide to communicate and have it happen automagically. I just don't see that happening any time soon. Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
oh, and...

> On 2/20/07, Andrzej Jan Taramina <andrzej@...> wrote:
> > You know, my inquiry was to do with integration scenarios, machine to
> > machine,...

I'm talking about machine-to-machine integration scenarios, but perhaps not the painful "classic" ones. New ones that are less painful. ;)

Hugh
On 2/20/07, Andrzej Jan Taramina <andrzej@...> wrote: > I don't think we're going to see a resolution of that semantic agreement in > my lifetime. I think we will. The technique we end up using will come from some effort like microformats. It probably won't have a ton to do with OWL. Some people will just tag elements in forms and declare what "author-name" means, and off we go. > > > But the aspect of the web that promotes independent > > evolution is the dialog that goes on between server and agent -- in > > particular forms. Servers tell agents how to compose messages; they > > don't agree on message formats ahead of time. > > You know, my inquiry was to do with integration schenarios, machine to > machine,... I'm interested in more classic integration scenarios and not the > concept of a "programmable web", and that is the context within which I want > to discuss Atom and APP applicability. > I think you will find APP enterprisey enough for those scenarios... Hugh
On 2/20/07, Andrzej Jan Taramina <andrzej@...> wrote: > Hugh: > > > I think we will. The technique we end up using will come from some > > effort like microformats. It probably won't have a ton to do with OWL. > > Some people will just tag elements in forms and declare what > > "author-name" means, and off we go. > > Sure, for specific problem areas or needs. ATOM/RSS did just that for blog > publishing and aggregation for the most part, at least at some conceptual > level. Agreed. You could extend the APP work to define how to tag form elements. > > My comment was more about overall semantics, that are not tied to a > particular application or solution segment. Where any arbitrary server and > client could just decide to communicate and have it happen automagically. I > just don't see that happening any time soon. > I don't think it will happen in a big bang either. It will happen in particular solution segments first. Then there will be some bleedover. Then certain winners will emerge from the tagwars. Hugh
> I don't think it will happen in a big bang either. It will happen in
> particular solution segments first. Then there will be some bleedover.
> Then certain winners will emerge from the tagwars.

I'm with you on that. We'll see data formats and semantic standards in disparate industry segments. What I doubt is that these will then somehow amalgamate together into one massive glop of semantic goodness, where anyone can talk to anyone, automagically. Just don't see it. Love the "tagwars" term. LOL

One of the things that is attractive about using Atom/APP for enterprise integration (specifically between disparate enterprises, where one can't dictate to the other) is its very simplicity. Easy to implement and get a prototype running to show value. Then not too hard to extend to provide more ROI.

I've always preferred evolutionary approaches to tough integration issues. Crawl, Walk, Run. If you believe the WS-GLOP vendors, they'll get you to hypersonic, interstellar speed, skipping the more prosaic crawl/walk/run stages. Problem is, the only thing that shows that kind of exponential path in that scenario is the vendor's bank account. ;-)

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com
On 20.02.2007, at 20:37, Andrzej Jan Taramina wrote:
>> Definitely. Just check out Yahoo Pipes or any of the Google (near)APP
>> services.
>
> I've been meaning to check out Pipes for a while....

I have applying-Atom/APP-to-ETL on my list of stuff to think about.... Yahoo Pipes are wonderfully near that, in a sense.

Jan
Hi all,

Apologies for the cross-posting, but (IMHO) this is a pretty significant development for both communities:

RESTful Rails Development
by Ralf Wirdemann & Thomas Baustert
http://www.b-simple.de/documents

As most of you know, Rails 1.2 made a pretty dramatic shift towards RESTful web applications. This document (recently translated from the German) is the definitive summary of what REST looks like from the perspective of Ruby on Rails -- which I suspect is where many new programmers are going to first experience REST! So, I encourage y'all to check it out. And, if you find areas where you think Rails isn't being RESTful enough, let me know and I'll collate the feedback for Mr. Rails, David H. (though I think he's on at least one of these lists himself).

-- Ernie P.

RESTful Rails Development PDF Released
http://www.rubyinside.com/restful-rails-development-pdf-released-392.html
POST BY PETER COOPER

A month ago I reported on the release of a PDF (in German) covering Rails' REST abilities by Ralf Wirdemann and Thomas Baustert (the authors of the first German Rails book, "Rapid Web Development mit Ruby on Rails"). With the help of Florian Görsdorf and Adam Groves, they've produced a fine English translation titled "RESTful Rails Development". It's still free (although donations are accepted). It's only about thirty pages long, but in that space it packs in a lot of information about Rails and REST, including REST routing, URLs, view techniques, path methods, and how to nest resources. If, like me, you're a Rails developer who's pretty savvy at the 1.0 level but haven't made the leap into the world of REST, it's a great primer.
On Tue, 2007-02-20 at 10:50 -0500, Andrzej Jan Taramina wrote:
> I'm getting a gut feel that Atom (both the format and the publishing
> protocol) is going to be very useful and used for
> many other things other than blog publishing. It has some very
> interesting characteristics that make it a good fit for various
> integration scenarios, amongst other things.
>
> Philosophically, it can almost be perceived as a simpler, easier, REST-
> oriented alternative to SOAP. We may finally have something to counter the
> dirtiness that is SOAP, since Atom can be a very good enveloping
> mechanism.

HTTP is REST's current alternative to SOAP+enough-WSDL-to-make-SOAP-do-something-useful. Atom is more like REST's (someone's?) alternative to RDF. That is to say, it complements a growing family of standard document types that can be included in each other, and can be extended for domain-specific uses without inventing a whole new vocabulary. RDF builds an abstract metamodel and expects vocabularies to fill the rest of the gaps. Successful formats to date have taken a different approach of defining vocabulary and document structure together.

XML appears to be a good tool in defining these document types. It allows other XML documents to be easily included as sub-documents. It allows existing document types to be extended. Combined with MIME, it even has a means to be explicit about the extension. I might define an industry-specific application/pids+atom+xml document type that could be interpreted purely as atom, but also has special features and vocabulary that can be handled by parsers aware of the relevant extensions.

I see the aggregation and sub-classing features of XML as the core of the semantic web going forwards. I think this is a more fully-featured way of representing data than rigid RDF. Perhaps my main concern going forwards is the modern propensity to try and replace MIME with URIs.
If I have an xml uri of http://w3c.org/xml and want to subclass it my url would likely look something like <http://wrsa.com.au/something>. In this case it is difficult for parsers to know that they can treat my document type as xml for the purpose of running xpath queries. I think mime and URIs both have their problems, but I think the problems with URIs are more terminal when it comes to building a real semantic web. Benjamin.
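Benjamin's application/pids+atom+xml idea leans on the "+" suffix convention in media types (his chaining of several suffixes is an extrapolation, not a registered convention). A minimal sketch of how a receiver could peel suffixes off a hypothetical type like that until it reaches something it knows how to process:

```python
def fallback_types(media_type):
    """For a '+'-suffixed media type, list the progressively more
    generic types a receiver could try, most specific first."""
    base, _, subtype = media_type.partition("/")
    parts = subtype.split("+")
    # e.g. application/pids+atom+xml -> pids+atom+xml, atom+xml, xml
    return [base + "/" + "+".join(parts[i:]) for i in range(len(parts))]

print(fallback_types("application/pids+atom+xml"))
# a parser that only knows Atom can pick "application/atom+xml";
# one that only knows XML can still run XPath against "application/xml"
```

This is exactly the knowledge that gets lost with bare URIs as type names: nothing in `<http://wrsa.com.au/something>` tells a parser it can fall back to treating the document as XML.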
Andrzej Jan Taramina wrote:
>
>
> I'm getting a gut feel that Atom (both the format and the publishing
> protocol) is going to be very useful and used for
> many other things other than blog publishing. It has some very
> interesting characteristics that make it a good fit for various
> integration scenarios, amongst other things.
>
> Philosophically, it can almost be perceived as a simpler, easier, REST-
> oriented alternative to SOAP.
The format yes, and I've been saying this for some time. I expect SOAP
stacks to start supporting Atom envelopes at some point, and then it
will be game over.
> I'm curious if anyone else is using/has used/ is considering ATOM and/or
> APP
> for other purposes, especially in the integration space?
Here you go:
Example:
-----------
[[[
<atom:entry
xmlns:iam="http://example.org/Schemas/Envelope"
xmlns:event="http://example.org/event/"
xmlns:atom="http://www.w3.org/2005/Atom">
<atom:title>Envelope Seen</atom:title>
<atom:link
href="http://example.org/mid/000001103A48103500C000A8002E000A4E0E125C"
/>
<atom:id>http://example.org/BizTalkInChannel/831bf016fad89f64f8411bb3c89a6ccb</atom:id>
<atom:updated>2007-01-20T12:15:09Z</atom:updated>
<atom:summary>Envelope seen on BT-IN channel</atom:summary>
<event:EventSource>http://example.org/channel/in/BizTalkInChannel</event:EventSource>
<event:EventLevel>http://example.org/event/level/info</event:EventLevel>
<atom:category term="envelope" scheme="http://example.org/cat/"/>
<atom:category term="event" scheme="http://example.org/cat/"/>
<atom:content type="text/xml">
...
</atom:content>
</atom:entry>
]]]
The Atom entry elements are populated as follows:
* atom:title: not important for processing, can appear in monitoring UI
* atom:link: not important for processing, provides a link to the
event on the source system (this might result in a 404)
* atom:id: unique identifier for the event. Important for duplicate
detection.
* atom:updated: the time the event was created at source
* atom:summary: not important for processing, can appear in
monitoring UI, might be useful in describing a system event.
* atom:category: the 'term' attribute indicates the event type if
the "scheme" attribute is 'http://example.org/cat/'. Other schemes are
not defined.
In addition, two extension elements are inserted into the entry
* event:EventSource: provides a URI that names the source of the Event.
* event:EventLevel: provides a URI that indicates the
level/severity of the event.
========
Event Levels and Types
========
Event Levels
---------
The level of an event is provided in the event:EventLevel extension
element. Levels in the element take the form of URIs as follows:
http://example.org/event/{LEVEL}
The defined levels are
* "trace": for testing
* "debug": a debug message
* "info": informative; also the default level for a message event
* "warn": a warning, typically used in system events
* "error": an error, typically used in system events
* "critical": a severe event, typically used in system events
Event Types
-----------
Use of atom:category
The type of an event is provided in the atom:category element. Types use
the 'scheme' and 'term' attributes. The 'scheme' attribute for a type is
http://example.org/cat/
Categorisation of events
------------
An event entry contains 2 such atom:categories.
First, all event entries contain an atom:category stating that they are
a monitoring event, using a "term" attribute value of 'event' as follows:
<atom:category term="event" scheme="http://example.org/cat/"/>
Second, events are categorised as either system or message types, using
"term" attribute values of 'system' and 'envelope' respectively:
* System event:
<atom:category term="system" scheme="http://example.org/cat/"/>
* Message event:
<atom:category term="envelope" scheme="http://example.org/cat/"/>
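Event entries like the one above can be consumed with any XML parser. A minimal sketch (Python's ElementTree; the namespace URIs are the ones from the example, while the function and variable names are illustrative, not part of any spec) that extracts the level and type, and uses atom:id for the duplicate detection mentioned above:

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
EVENT = "http://example.org/event/"
CAT_SCHEME = "http://example.org/cat/"

# abbreviated version of the example entry above
ENTRY = """<atom:entry xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:event="http://example.org/event/">
  <atom:id>http://example.org/BizTalkInChannel/831bf016</atom:id>
  <event:EventLevel>http://example.org/event/level/info</event:EventLevel>
  <atom:category term="envelope" scheme="http://example.org/cat/"/>
  <atom:category term="event" scheme="http://example.org/cat/"/>
</atom:entry>"""

seen_ids = set()  # atom:id values already processed (duplicate detection)

def handle_event(entry_xml):
    """Parse one event entry; return (level, types), or None for a duplicate."""
    root = ET.fromstring(entry_xml)
    entry_id = root.findtext(f"{{{ATOM}}}id")
    if entry_id in seen_ids:
        return None          # atom:id is the duplicate-detection key
    seen_ids.add(entry_id)
    level_uri = root.findtext(f"{{{EVENT}}}EventLevel")
    level = level_uri.rsplit("/", 1)[-1]   # assumes level is the last segment
    # only categories in the defined scheme carry event-type meaning
    types = [c.get("term") for c in root.findall(f"{{{ATOM}}}category")
             if c.get("scheme") == CAT_SCHEME]
    return level, types

print(handle_event(ENTRY))   # first delivery is processed
print(handle_event(ENTRY))   # repeat delivery is dropped as a duplicate
```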
Benjamin Carlyle wrote: > HTTP is REST's current alternative to SOAP > +enough-WSDL-to-make-SOAP-do-something-useful. Except that sending an HTTP envelope over XMPP is awkward. Atom travels over both, trivially for entries (feeds are more tied to HTTP than first meets the eye). And being XML it's structurally closer to SOAP; so it's less of a cognitive leap than going back to MIME. Anyone strategically invested in SOAP will be able to spin for Atom support to complement a REST checkbox. There are general problems with using XML as an envelope insofar as embedding payloads physically inside elements can result in all kinds of suckage - look at the half-decade of back and forth that has resulted in MTOM and the pita that XML DSig has proven to be. I'm not sure MIME+octets isn't a superior packaging technology if packaging is what you need. "It's not much of an improvement over doc style SOAP. Just being RESTful isn't enough to encourage widespread adoption. Yay: there's no verb in the message. Yay, they obey HTTP method semantics. Apart from that they both wrap a document in an envelope. As long as servers and clients have to agree ahead of time on complete document semantics they're forcing one another to lock in code at development time" I see Hugh is beating up on APP again. Hopefully we'll see him on the WG for last call "adding future business value". The last part by the way about complete document semantics simply isn't true if your document semantics are based on model theoretic semantics - the agents communities have been slinging documents back and forth for years without being tied to runtimes. It's just that programming to a model theory is esoteric compared to switch-on-type; programmers generally won't roll with it. But I'll say this - the first thing to do here is compare and contrast the minimal Atom document with SOAP's before trashing Atom.
I did the minimal SOAP envelope years ago and that strongly influenced a professional commitment not to deploy on it for clients. Atom at least makes you say something. > Atom is more like REST's > (someone's?) alternative to RDF. RDF has characteristics that make it better than Atom (or any such XML/XHTML format) as an interlingua, but isn't quite structured enough as a format. That means to support RDF you tend to end up interfering with backend application storage to support the flexibility (IME). You can sink a lot of money doing that. Because Atom is less flexible it's easier to tie down wrt OO domain and relational backends. The only highly generic thing you need to support across domain types are tags, and there are some decent many-to-many table models you can use for that now. > That is to say, it compliments a > growing family of standard document types that can be included in each > other, and can be extended for domain-specific uses without inventing a > whole new vocabulary. > > RDF builds an abstract metamodel and expects vocabularies to fill the > rest of the gaps. Successful formats to date have taken a different > approach of defining vocabulary and document structure together. I would add also that successful formats have left format semantics in code rather than using declarative/model techniques. Elliotte has historically had a good basic stance on why this gets deployed over shared formal _models_. > I see the aggregation and sub-classing features of XML as the core of > the semantic web going forwards. I think this is a more fully-featured > way of representing data than rigid RDF. I'm not so sure. I think linking and tagging are the way forward, and the person who persuades people to go with prefix conventions for tag labels instead of full URIs as RDF/semweb does, will win. No-one uses "dc:" in XML for anything other than Dublin core. cheers Bill
On 2/20/07, Bill de hOra <bill@...> wrote: > "It's not much of an improvement over doc style SOAP. Just being RESTful > isn't enough to encourage widespread adoption. Yay: there's no verb in > the message. Yay, they obey HTTP method semantics. Apart from that they > both wrap a document in an envelope. As long as servers and clients have > to agree ahead of time on complete document semantics they're forcing > one another to lock in code at development time" > > I see Hugh is beating up on APP again. Hopefully we'll see him on the WG > for last call "adding future business value". I guess I am beating up on it, and unfairly. I'm raising topics here, not on Atompub, because it's late in the APP cycle, the forms idea is not baked, and the envelope thing might have been a good issue to raise two years ago. Unfairly, because it's like I've been scolding the dog for doing a bad job mowing the yard (an old Far Side cartoon). APP is going to work great for blogging clients, and for GData which will be widely adopted. >The last part by the way > about complete document semantics simply isn't true if your document > semantics are based on model theoretic semantics - the agents > communities have been slinging documents back and forth for years > without been tied to runtimes. It's just that programming to a model > theory is esoteric compared to switch-on-type; programmers generally > won't roll with it. > Well, yeah. We really need to find a way to make it practical to do reasoning over these tags. It's hard because: there are lots of definitions, the definitions are not centrally located, and even if you get them all in one place, reasoning over them is a massive program in itself. So I think to make it practical, programs have to call a service to do it for them. If we don't address that issue, web services will be stuck forever. Each integration will be painful and time consuming. As now. 
> But I'll say this - the first thing to do here is compare and contrast > the minimal Atom document with SOAP's before trashing Atom. I did the > minimal SOAP envelope years ago and that strongly influenced a > professional commitment not to deploy on it for clients. Atom at least > makes you say something. > Fair enough. > > Atom is more like REST's > > (someone's?) alternative to RDF. > > RDF has characteristics that make it better than Atom (or any such > XML/XHTML format) as an interlingua, but isn't quite structured enough > as a format. That means to support RDF you tend to end up interfering > with backend application storage to support the flexibility (IME). You > can sink a lot of money doing that. Because Atom is less flexible it's > easier to tie down wrt OO domain and relational backends. The only > highly generic thing you need to support across domain types are tags, > and there are some decent many 2 many table models you can use for that now. > > > > That is to say, it compliments a > > growing family of standard document types that can be included in each > > other, and can be extended for domain-specific uses without inventing a > > whole new vocabulary. > > > > RDF builds an abstract metamodel and expects vocabularies to fill the > > rest of the gaps. Successful formats to date have taken a different > > approach of defining vocabulary and document structure together. > > I would add also that successful formats have left format semantics in > code rather than using declarative/model techniques. Elliotte has > historically had a good basic stance on why this gets deployed over > shared formal _models_. > > > I see the aggregation and sub-classing features of XML as the core of > > the semantic web going forwards. I think this is a more fully-featured > > way of representing data than rigid RDF. > > I'm not so sure. 
I think linking and tagging are the way forward, and > the person who persuades people to go with prefix conventions for tag > labels instead of full URIs as RDF/semweb does, will win. >No-one uses > "dc:" in XML for anything other than Dublin core. > Yes, the web is voting, and they are not even using prefixes, much less URIs or RDF. > cheers > Bill >
On 20 Feb 2007, at 21:58, Benjamin Carlyle wrote: > On Tue, 2007-02-20 at 10:50 -0500, Andrzej Jan Taramina wrote: > > I'm getting a gut feel that Atom (both the format and the publishing > > protocol) is going to be very useful and used for > > many other things other than blog publishing. It has some very > > interesting characteristics that make it a good fit for various > > integration scenarios, amongst other things. > > > > Philosophically, it can almost be perceived as a simpler, easier, > > REST- > > oriented alternative to SOAP. We may finally have something to > counter > > the > > dirtiness that is SOAP, since Atom can be a very good enveloping > > mechanism. > > HTTP is REST's current alternative to SOAP > +enough-WSDL-to-make-SOAP-do-something-useful. Atom is more like > REST's > (someone's?) alternative to RDF. That is to say, it compliments a > growing family of standard document types that can be included in each > other, and can be extended for domain-specific uses without > inventing a > whole new vocabulary. Nonsense. APP is just a way to publish content. It is a simpler version of WebDAV. See http://blogs.sun.com/bblfish/entry/what_atom_is_all_about You can publish RDF using APP, just as you can publish RDF using WebDAV. You don't even have to wrap what you publish using atom. See section 5.3 http://bitworking.org/projects/atom/draft-ietf-atompub- protocol-13.html#rfc.section.5.3 And neither does the server have to wrap what he receives in atom. Furthermore one can give a very good RDF ontology of Atom. Atom Owl Ontology: http://bblfish.net/work/atom-owl/2006-06-06/AtomOwl.html which one can then use to query information sources, such as the Roller blog engine SPARQLing Roller: http://blogs.sun.com/bblfish/entry/sparqling_roller > RDF builds an abstract metamodel and expects vocabularies to fill the > rest of the gaps. Successful formats to date have taken a different > approach of defining vocabulary and document structure together.
XML > appears to be a good tool in defining these document types. It allows > other XML documents to be easily included as sub-documents. It allows > existing document types to be extended. Combined with mime, it even > has > a means to be explicit about the extension. I might define an > industry-specific application/pids+atom+xml document type that > could be > interpreted purely as atom, but also has special features and > vocabulary > that can be handled by parsers aware of the relevant extensions. Yes, and that is the big problem with trying to use syntax to exchange data. You end up having to do double the amount of work other people have to do. Not only do you have to model the types of objects that exist and the way they relate to each other, you also have to find an arbitrary tree structure to fit them into it. All XML Roads Lead to RDF: http://blogs.sun.com/bblfish/entry/how_applying_xml_to_data > > I see the aggregation and sub-classing features of XML as the core of > the semantic web going forwards. I think this is a more fully-featured > way of representing data than rigid RDF. No, it is xml that is rigid. See Crystalising RDF http://blogs.sun.com/bblfish/entry/crystalizing_rdf > Perhaps my main concern going > forwards is the modern propensity to try and replace MIME with > URIs. If > I have an xml uri of http://w3c.org/xml and want to subclass it my url > would likely look something like <http://wrsa.com.au/something>. In > this > case it is difficult for parsers to know that they can treat my > document > type as xml for the purpose of running xpath queries. I think mime and > URIs both have their problems, but I think the problems with URIs are > more terminal when it comes to building a real semantic web. > > Benjamin. > > > >
Hugh Winkler wrote: > On 2/20/07, Bill de hOra <bill@...> wrote: > >> The last part by the way >> about complete document semantics simply isn't true if your document >> semantics are based on model theoretic semantics - the agents >> communities have been slinging documents back and forth for years >> without been tied to runtimes. It's just that programming to a model >> theory is esoteric compared to switch-on-type; programmers generally >> won't roll with it. >> > > Well, yeah. We really need to find a way to make it practical to do > reasoning over these tags. It's hard because: there are lots of > definitions, the definitions are not centrally located, and even if > you get them all in one place, reasoning over them is a massive > program in itself. So I think to make it practical, programs have to > call a service to do it for them. No argument there. I'd love to read a nutshell analysis as to why people aren't falling over themselves for those kinds of technologies (KR, rules, agents, semweb) - they directly address multi-billion dollar issues in the industry (specifically integration and change resilience). I've always seen application protocols as the "software agent" equivalent of grunting. cheers Bill
On Feb 20, 2007, at 5:53 PM, Henry Story wrote: > You don't even have to wrap what you publish using atom. See > section 5.3 > http://bitworking.org/projects/atom/draft-ietf-atompub- > protocol-13.html#rfc.section.5.3 You mean like when you are posting media resources? [1] > And neither does the server have to wrap what he receives in atom. When posting media resources the app server MUST respond with a link to the media link entry which is a URI for an atom:entry. Perhaps I'm missing what you mean by 'wrap'? //Ed [1] http://bitworking.org/projects/atom/draft-ietf-atompub- protocol-13.html#media-link-entries
On 2/20/07, Bill de hOra <bill@...> wrote:
> [... Bill's Atom event envelope example and the event level/type notes, quoted in full in his post above, snipped ...]
I've looked at serving Log4J events up as an Atom feed, with something
remote to pull it back in. Right now I use the Log4J HtmlAppender to
serve up human-only results. I also want to feed out the results of
various unit test runs. There is a patch to do this for Log4J, but it
uses Rome and JDOM, which seems a bit of overkill for something that
is just trying to stream out XML. I'd rather just generate Atom pages
that survive a system outage.
Right now one inconvenience is that Firefox 2 deliberately doesn't
listen to the CSS statements, so I can't format my feed in a way that
is nice for people on FF 2.0 and good for machines. Annoying.
-steve
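Streaming log events out as Atom without a feed library really is just string assembly plus escaping. A sketch (Python for brevity; a Java StringBuilder version would be the direct analogue, and the tag: URI scheme used for atom:id is one common choice, not a requirement):

```python
from datetime import datetime, timezone
from xml.sax.saxutils import escape

def log_to_atom_entry(logger, level, message, when=None):
    """Render one log record as a stand-alone atom:entry string.
    No feed library needed: just escaping and string assembly."""
    when = when or datetime.now(timezone.utc)
    updated = when.strftime("%Y-%m-%dT%H:%M:%SZ")
    # a tag: URI keyed on logger + timestamp is one way to get a unique atom:id
    entry_id = "tag:example.org,2007:%s/%s" % (logger, when.timestamp())
    return (
        '<entry xmlns="http://www.w3.org/2005/Atom">'
        "<title>%s: %s</title>"
        "<author><name>%s</name></author>"
        "<id>%s</id>"
        "<updated>%s</updated>"
        "<summary>%s</summary>"
        "</entry>"
    ) % (escape(level), escape(logger), escape(logger), escape(entry_id),
         updated, escape(message))

entry = log_to_atom_entry("org.example.App", "WARN", "disk < 10% free")
print(entry)
```

Appending each rendered entry to a file as it happens is what gives you feed pages that survive an outage: the already-written entries are still on disk when the process comes back.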
On 21 Feb 2007, at 12:23, Edward Summers wrote:
> On Feb 20, 2007, at 5:53 PM, Henry Story wrote:
> > You don't even have to wrap what you publish using atom. See
> > section 5.3
> > http://bitworking.org/projects/atom/draft-ietf-atompub-
> > protocol-13.html#rfc.section.5.3
>
> You mean like when you are posting media resources? [1]
Yes. Your pointer is better than mine.
You could post rdf as a media resource for example, instead of
wrapping it in the atom content of an entry.
> > And neither does the server have to wrap what he receives in atom.
>
> When posting media resources the app server MUST respond with a link
> to the media link entry which is a URI for an atom:entry. Perhaps I'm
> missing what you mean by 'wrap'?
>
Yes. As the example shows:
[[
HTTP/1.1 201 Created
Date: Fri, 7 Oct 2005 17:17:11 GMT
Content-Length: nnn
Content-Type: application/atom+xml; charset="utf-8"
Location: http://example.org/media/edit/the_beach.atom
<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
<title>The Beach</title>
<id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
<updated>2005-10-07T17:17:08Z</updated>
<author><name>Daffy</name></author>
<summary type="text" />
<content type="image/png"
src="http://media.example.org/the_beach.png"/>
<link rel="edit-media"
href="http://media.example.org/edit/the_beach.png" />
<link rel="edit"
href="http://example.org/media/edit/the_beach.atom" />
</entry>
]]
Here the content is at http://media.example.org/the_beach.png
So really the atom server is just creating metadata about
"...the_beach.png".
That metadata could be expressed in rdf using atom-owl btw:
[] a :Entry;
:title "The Beach";
:id "urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a"^^xsd:anyURI;
:updated "2005-10-07T17:17:08Z"^^xsd:dateTime;
:author [ a foaf:Person;
:name "Daffy";
];
:summary [ :type "text" ];
:content [ :type "image/png";
:src <http://media.example.org/the_beach.png>
].
(I left out a few statements)
So it is not xml or rdf formats that are important here. It is that
we expect the server to do certain things when it receives a POST or
a PUT. And this is why Atom is a great example of a RESTful web service.
And we could generalise from it. We could define certain well-defined
RDF relations which, when PUT or POSTed to certain types of
containers, lead us to expect certain things to be done. For example we might
want to define shopping-cart containers (a subclass of :Collection
perhaps) which, when we POST a document that is a BuyOrder, do something
useful like return us the URL of our buy order, which in RDF N3
would look like this:
<> a :BuyOrder;
:product <http://sun.com/blackbox>;
:amount 25;
:status "processed";
:author <http://bblfish.net/people/henry/card#me> .
So if you want you can find an xml crystalisation of the above rdf.
And then we can do this without requiring people to have rdf tools.
But since RDF tools are becoming more and more widespread, at some
point we won't even have to bother about the crystalisation.
see: http://blogs.sun.com/bblfish/entry/250_semantic_web_tools
Hope this shows why I think Atom is both a good example of a RESTful
service, and why it is not at all incompatible with RDF.
Henry
> //Ed
>
> [1] http://bitworking.org/projects/atom/draft-ietf-atompub-
> protocol-13.html#media-link-entries
>
>
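To make the "crystallisation" idea concrete: one possible XML rendering of the BuyOrder graph above might look like the following (the element names and namespace are illustrative, not any agreed vocabulary):

```xml
<!-- hypothetical XML crystallisation of the BuyOrder N3 graph -->
<shop:BuyOrder xmlns:shop="http://example.org/shop#">
  <shop:product>http://sun.com/blackbox</shop:product>
  <shop:amount>25</shop:amount>
  <shop:status>processed</shop:status>
  <shop:author>http://bblfish.net/people/henry/card#me</shop:author>
</shop:BuyOrder>
```

A client without RDF tooling can POST or parse this as plain XML, while an RDF-aware client can map it back onto the graph.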
On 2/21/07, Nic James Ferrier <nferrier@...> wrote: > "Steve Loughran" <steve.loughran.soapbuilders@...> writes: > > > right now one inconvenience is that firefox 2 deliberately doesnt > > listen to the CSS statements, so I cant format my feed in a way that > > is nice for people on FF2.0 and good for machines. annoying. > > Can't you just use XSLT? > well, I'm trying to serve up the same feed for people and machines. I could certainly XSLT it on demand, but I'd rather avoid the effort. It used to work, that is what annoys me the most.
Hi peeps This is not the exciting REST application I've been working on (that's held up as the search for funding goes down the toilet) but it is still quite interesting - I think. There are two problems with OpenID as far as I can see: - the authenticating user is directed away from the website they want to login to in order to login. I don't think many users will like or understand this. I don't think that many webstores (and other sites) will like this. - having a machine authenticate on your behalf (outside the browser) is difficult and requires new protocols to be supported. prooveme.com attempts to solve both these problems by giving each OpenID user a client certificate. Now when the user authenticates the auth happens immediately (the OP can say "does the user have the cert? yes or no?"). Users won't see any website change as they login (though they do see a page change obviously). prooveme.com can also help you get a machine to authenticate on your behalf. If you give your certificate to a machine then it can use the existing OpenID protocol to login... as long as the machine's HTTP client supports HTTPS and redirects. As an example, CURL does. My idea is to build a small amount of GUI into prooveme.com to allow users to generate additional certificates with time or login attempt constraints and to allow those certificates to be distributed to supporting clients. If anybody has any thoughts I'd be really interested to hear them. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
"Steve Loughran" <steve.loughran.soapbuilders@...> writes: > right now one inconvenience is that firefox 2 deliberately doesnt > listen to the CSS statements, so I cant format my feed in a way that > is nice for people on FF2.0 and good for machines. annoying. Can't you just use XSLT? -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
"Steve Loughran" <steve.loughran.soapbuilders@...> writes: > On 2/21/07, Nic James Ferrier <nferrier@...> wrote: >> "Steve Loughran" <steve.loughran.soapbuilders@...> writes: >> >> > right now one inconvenience is that firefox 2 deliberately doesnt >> > listen to the CSS statements, so I cant format my feed in a way that >> > is nice for people on FF2.0 and good for machines. annoying. >> >> Can't you just use XSLT? >> > > well, I'm trying to serve up the same feed for people and machines. I > could certainly XSLT it on demand, but I'd rather avoid the effort. It > used to work, that is what annoys me the most. I meant embed the XSLT in the feed: <?xml version="1.0"?> <?xml-stylesheet href="somexslt.xslt" type="application/xml"/> <feed ... </feed> your XSLT can render to HTML and that can pull in a CSS. No? -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
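For reference, a minimal somexslt.xslt along those lines might look like the following (the file names and CSS path are illustrative; note also that in practice Firefox and IE only apply the transform when the PI says type="text/xsl", not type="application/xml"):

```xml
<?xml version="1.0"?>
<!-- minimal sketch: renders an Atom feed as HTML, which can then
     pull in ordinary CSS; element names are from the Atom namespace -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:template match="/atom:feed">
    <html>
      <head>
        <link rel="stylesheet" type="text/css" href="feed.css"/>
        <title><xsl:value-of select="atom:title"/></title>
      </head>
      <body>
        <h1><xsl:value-of select="atom:title"/></h1>
        <xsl:for-each select="atom:entry">
          <div class="entry">
            <h2><xsl:value-of select="atom:title"/></h2>
            <p><xsl:value-of select="atom:summary"/></p>
          </div>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Machines fetching the feed just ignore the xml-stylesheet PI, so the same representation serves both audiences.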
On Wed, Feb 21, 2007 at 03:15:30PM +0100, Henry Story wrote: > On 21 Feb 2007, at 12:23, Edward Summers wrote: > > On Feb 20, 2007, at 5:53 PM, Henry Story wrote: > > > You don't even have to wrap what you publish using atom. See > > > section 5.3 > > > http://bitworking.org/projects/atom/draft-ietf-atompub- > > > protocol-13.html#rfc.section.5.3 > > > > You mean like when you are posting media resources? [1] > Yes. Your pointer is better than mine. (snip) > Yes. As the example shows: > > [[ > HTTP/1.1 201 Created > Date: Fri, 7 Oct 2005 17:17:11 GMT > Content-Length: nnn > Content-Type: application/atom+xml; charset="utf-8" > Location: http://example.org/media/edit/the_beach.atom > > <?xml version="1.0"?> > <entry xmlns="http://www.w3.org/2005/Atom"> I'm not getting your point. This is an Atom Media Link Entry. How does this demonstrate using something other than an atom format as the representation? > Here the content is at http://media.example.org/the_beach.png > > So really the atom server is just creating metadata about > "...the_beach.png". Well, yes, that's what an atom Media Link Entry is for. http://bitworking.org/projects/atom/draft-ietf-atompub-protocol-13.html#media-link-entries > That metadata could be expressed in rdf using atom-owl btw: I don't see where in the APP spec this is allowed. For example, at http://bitworking.org/projects/atom/draft-ietf-atompub-protocol-13.html#memuri it says: "Retrieval and updating of Member Entry Resources are done by exchanging Atom Entry representations." Similarly, everything else I can find in that document about representations refers to rfc4287. > So it is not xml or rdf formats that are important here. It is that > we expect the server to do certain things when it receives a POST or > a PUT. And this is why Atom is a great example of a RESTful web service. That sounds good, but either I'm misreading the APP spec, or it does in fact say that the Atom syndication format must be used.
Of course, nothing prevents you from building a REST application that follows the APP spec except that it doesn't use ASF for representations. Which might be the best application since sliced bread, but it wouldn't be APP... unless I'm mistaken. -- Paul Winkler http://www.slinkp.com
On 2/21/07, Nic James Ferrier <nferrier@...> wrote: > I meant embed the XSLT in the feed: > > <?xml version="1.0"?> > <?xml-stylesheet href="somexslt.xslt" type="application/xml"/> > <feed ... > </feed> > > your XSLT can render to HTML and that can pull in a CSS. > > No? I'll have to test that, to see if it sneaks past FF2.0 and IE7. -steve
"Steve Loughran" <steve.loughran.soapbuilders@...> writes: > On 2/21/07, Nic James Ferrier <nferrier@...> wrote: > >> I meant embed the XSLT in the feed: >> >> <?xml version="1.0"?> >> <?xml-stylesheet href="somexslt.xslt" type="application/xml"/> >> <feed ... >> </feed> >> >> your XSLT can render to HTML and that can pull in a CSS. >> >> No? > > I'll have to test that, to see if it sneaks past FF2.0 and IE7. It does work. It's one of the few things that are cross browser. Also works in Opera. -- Nic Ferrier http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
On 21 Feb 2007, at 16:07, Paul Winkler wrote:
> On Wed, Feb 21, 2007 at 03:15:30PM +0100, Henry Story wrote:
> > On 21 Feb 2007, at 12:23, Edward Summers wrote:
> > > On Feb 20, 2007, at 5:53 PM, Henry Story wrote:
> > > > You don't even have to wrap what you publish using atom. See
> > > > section 5.3
> > > > http://bitworking.org/projects/atom/draft-ietf-atompub-protocol-13.html#rfc.section.5.3
> > >
> > > You mean like when you are posting media resources? [1]
> > Yes. Your pointer is better than mine.
> (snip)
> > Yes. As the example shows:
> >
> > [[
> > HTTP/1.1 201 Created
> > Date: Fri, 7 Oct 2005 17:17:11 GMT
> > Content-Length: nnn
> > Content-Type: application/atom+xml; charset="utf-8"
> > Location: http://example.org/media/edit/the_beach.atom
> >
> > <?xml version="1.0"?>
> > <entry xmlns="http://www.w3.org/2005/Atom">
>
> I'm not getting your point. This is an Atom Media Link Entry.
> How does this demonstrate using something other than an atom format as
> the representation?

I was arguing that you don't have to wrap your content in an atom entry. The example above, taken from the spec, shows that media entries are not wrapped. It is just returning metadata about the posted content.

> > Here the content is at http://media.example.org/the_beach.png
> >
> > So really the atom server is just creating metadata about
> > "...the_beach.png".
>
> Well, yes, that's what an atom Media Link Entry is for.
> http://bitworking.org/projects/atom/draft-ietf-atompub-protocol-13.html#media-link-entries
>
> > That metadata could be expressed in rdf using atom-owl btw:
>
> I don't see where in the APP spec this is allowed.

I am saying *could* in the sense that technically it would not have been a silly idea, nor a difficult one to comprehend. The argument initially was that the semantic web was incompatible with RESTful web services. I just showed how that is nonsense, since you can think of the atom xml as just a representation of an rdf graph.
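To make the point concrete: a client that receives the 201 response above gets back an Atom entry whose <content> merely points at the stored media. A minimal Python sketch of pulling the two interesting URIs out of such a Media Link Entry; the entry body below is modelled on the spec example, with illustrative id/title values filled in:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A Media Link Entry in the style of the APP draft's example: the entry
# itself is Atom, but its <content> only points at the posted media.
entry_xml = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>The Beach</title>
  <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
  <updated>2005-10-07T17:17:08Z</updated>
  <link rel="edit" href="http://example.org/media/edit/the_beach.atom"/>
  <link rel="edit-media" href="http://media.example.org/edit/the_beach.png"/>
  <content type="image/png" src="http://media.example.org/the_beach.png"/>
</entry>"""

def media_links(doc):
    """Return (media src URI, entry edit URI) from a Media Link Entry."""
    root = ET.fromstring(doc)
    src = root.find(ATOM + "content").get("src")
    edit = next(link.get("href") for link in root.findall(ATOM + "link")
                if link.get("rel") == "edit")
    return src, edit

src, edit = media_links(entry_xml)
```

Nothing about the posted PNG is "wrapped" here; the entry is pure metadata about it.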
> For example, at
> http://bitworking.org/projects/atom/draft-ietf-atompub-protocol-13.html#memuri
> it says: "Retrieval and updating of Member Entry Resources are done by
> exchanging Atom Entry representations."
>
> Similarly, everything else I can find in that document about
> representations refers to rfc4287.
>
> > So it is not xml or rdf formats that are important here. It is that
> > we expect the server to do certain things when it receives a POST or
> > a PUT. And this is why Atom is a great example of a RESTful web service.
>
> That sounds good, but either I'm misreading the APP spec, or it does
> in fact say that the Atom syndication format must be used.

Whether it does or does not is not the question here in my mind. It would not have been any technical problem to do it in rdf. It certainly would have made the extensibility of atom more predictable. Currently you have to search around for rfcs to get an idea of what is meant by any atom extensions. If you ever find them, you then have to read those rfcs very carefully to be able to work out what they may possibly mean. Semantic Web technologies would have made the understanding of atom extensions a lot easier.

It would probably have made the atom protocol itself better. Currently one is forced to fill the atom entry with a number of dummy fields (such as the required id) even when they are never needed or are completely determined by the server. There are a few other areas where better semantic definitions in atom would have helped. Btw, it was part of the Atom working group's aim to provide an ontology initially.

> Of course, nothing prevents you from building a REST application that
> follows the APP spec except that it doesn't use ASF for
> representations. Which might be the best application since sliced
> bread, but it wouldn't be APP... unless I'm mistaken.

Well, I am happy to hear you say that RESTful applications using RDF could be cool.
After all, RDF stands for Resource Description Framework. REST stands for Representational State Transfer. Web Architecture states that Resources have Representations... So the two are pretty tightly related.

In short, as Bill argues in another thread, there are advantages to a simple XML format such as atom over rdf, such as there being more people used to DOM and XSLT tools, and those being more battle tested. But that is becoming less the case. XML crystallisations are the way to get the best of both worlds, but they are more difficult to engineer. It may just be a matter of both communities understanding each other better for this to work itself out....

Henry

> -- 
> Paul Winkler
> http://www.slinkp.com
Has anybody here done anything serious with NetKernel's "REST micro-kernel"? What did you think? I know Steve is connected to the company, and Hugh and Bill have at least played around. Their opinions are interesting, but real independent deployed apps would be even more interesting. Available on the Web would be even better.
--- In rest-discuss@yahoogroups.com, Edward Summers <ehs@...> wrote: > > Funny, I'm just now in the process of formulating a simple > demonstration of how to manage bibliographic metadata with APP. Has > anyone else documented their use of APP for non-blogging applications > anywhere? I have run across gdata and pipes--what are the other > prominent implementations? > > //Ed > Not an implementation--yet--just a recommendation to implement: Intelligence Community Moving Toward Atom <http://www.furl.net/item.jsp?id=16561514> . -- Nick
Unfortunately, the link seems to be private (not the one you sent, the one referenced there). Stefan On Feb 21, 2007, at 6:51 PM, Nick Gall wrote: > > --- In rest-discuss@yahoogroups.com, Edward Summers <ehs@...> wrote: > > > > Funny, I'm just now in the process of formulating a simple > > demonstration of how to manage bibliographic metadata with APP. Has > > anyone else documented their use of APP for non-blogging > applications > > anywhere? I have run across gdata and pipes--what are the other > > prominent implementations? > > > > //Ed > > > Not an implementation--yet--just a recommendation to implement: > Intelligence Community Moving Toward Atom. -- Nick > >
On 2/21/07, Bob Haugen <bob.haugen@...> wrote:
> Has anybody here done anything serious with NetKernel's "REST micro-kernel"?
>
> What did you think?
>
> I know Steve is connected to the company, and Hugh and Bill have at
> least played around. Their opinions are interesting, but real
> independent deployed apps would be even more interesting.

I have no financial affiliations to them at all. They just happen to be friends of mine who host my blog. The original NetKernel was an ex-HP Labs project, which got killed for various reasons; the NetKernel team got to take the idea, all the IP and the code; what you get now is a complete rewrite.

Writing something serious with NK is still on my very-long todo list; what it can do is bridge a database to the net without going through the Java EE layer, which makes sense. Why do an R/O then an O/X mapping, when you can go straight from R to X?

I think the big barrier to me sitting down and doing so is the usual one: time. I always get the feeling that NK works best if you understand the X-* spec world, XPath, XSL, XForms, etc., and frankly, my knowledge of all of these is patchy to say the least. The part of my brain that can cope with specifications has been filled up with WS-* and the Java EE specs, and I need to slowly let those facts decay before I can fill them with more useful things.

At the same time, I think it's one of the more interesting ways of viewing and working with data. It and Yahoo Pipes could connect together in interesting ways. It's certainly made a bigger jump than, say, Ruby has, or REST annotations for Java. But revolutions are often harder to take up than evolution.

If/when I play with it more I will describe it without too much bias. But this week's spare time is being spent learning the R language to do analysis of six months' worth of passers-by bluetooth data.

-steve
OK...we all know that BPEL engines typically consume WSDL and SOAP. Anybody know of any similar beasties that do orchestration in the REST world? That is, BPEL engines that can do resource manipulation/orchestration across the usual HTTP verbs? Thanks! Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
On 2/23/07, Andrzej Jan Taramina <andrzej@...> wrote:
> Anybody know of any similar beasties that do orchestration in the REST world?
> That is, BPEL engines that can do resource manipulation/orchestration across
> the usual HTTP verbs?

Do you mean BPEL literally, or any way to do orchestration or choreography RESTfully?

The w3c Choreography group, which I was part of for a while, started out using pi-calculus as its basis, which I think has some similarity to REST. But of course they did not end up there...
Do the folks who designed this get REST? http://framework.zend.com/manual/en/zend.rest.server.html
On Feb 23, 2007, at 6:28 PM, sanatgersappa wrote: > Do the folks who designed this get REST? > > http://framework.zend.com/manual/en/zend.rest.server.html > > > > To call a Zend_Rest_Server service, you must supply a GET/POST > method argument with a value that is the method you wish to call. > You can then follow that up with any number of arguments using > either the name of the argument (i.e. "who") or using arg following > by the numeric position of the argument (i.e. "arg1"). No. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On 2/23/07, Stefan Tilkov <stefan.tilkov@...> wrote: > On Feb 23, 2007, at 6:28 PM, sanatgersappa wrote: > > Do the folks who designed this get REST? > > > > http://framework.zend.com/manual/en/zend.rest.server.html > > > > > > > > > To call a Zend_Rest_Server service, you must supply a GET/POST > > method argument with a value that is the method you wish to call. > > You can then follow that up with any number of arguments using > > either the name of the argument (i.e. "who") or using arg following > > by the numeric position of the argument (i.e. "arg1"). > No. It's accidentally RESTful for "get me this data" operations that happen to use GET, and for mutating operations that happen to use POST. But the clients would be unnecessarily tightly coupled to servers, and it otherwise doesn't seem to help with resource-orientation. http://www.markbaker.ca/blog/2005/04/14/accidentally-restful/ Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
On Wed, 2007-02-21 at 09:39 +0000, Bill de hOra wrote:
> Hugh Winkler wrote:
> > On 2/20/07, Bill de hOra <bill@...> wrote:
> >> The last part by the way
> >> about complete document semantics simply isn't true if your document
> >> semantics are based on model theoretic semantics - the agents
> >> communities have been slinging documents back and forth for years
> >> without been tied to runtimes. It's just that programming to a model
> >> theory is esoteric compared to switch-on-type; programmers generally
> >> won't roll with it.
> > Well, yeah. We really need to find a way to make it practical to do
> > reasoning over these tags. It's hard because: there are lots of
> > definitions, the definitions are not centrally located, and even if
> > you get them all in one place, reasoning over them is a massive
> > program in itself. So I think to make it practical, programs have to
> > call a service to do it for them.
> No argument there. I'd love to read a nutshell analysis as to why people
> aren't falling over themselves for those kinds of technologies (kr,
> rules, agents, semweb) - they directly address multi-billion dollar
> issues in the industry (specifically integration and change resilience).
> I've always seen application protocols as the "software agent"
> equivalent of grunting.

Let me have a quick go at debunking RDF. I have put a few years' thought into this. While I know that what I am about to say works directly against deep assumptions of current semantic web proponents, I believe it is grounded in reality. I also believe that we need to face these issues and come up with some good answers in order to actually achieve the semantic web.

Firstly, let me define the semantic web as I see it: the semantic web is a software architecture in which architecture components can initiate or otherwise be involved in standard interactions such as GET requests with corresponding responses.
These interactions transfer data from one component to another in standard forms that can be understood, correctly interpreted, and used for the purposes of the consuming component. Because standard interactions are in use, it is possible to configure each component to talk to just about any other component that describes its data in the same way. The data description comes in three parts:

* Document type
* Vocabulary
* Structure

You might recognise my semantic web as being the same as the RESTful web. I see the two as strongly correlated. From a RESTful perspective we are interested in using standard document types, which implies the use of standard vocabulary and structure. RDF decouples structure from vocabulary and perhaps from document type, suggesting that many document types on the web should use one of its serialisations. Vocabulary should be mapped onto the graph so that it can be arbitrarily aggregated without loss of information, then later queried.

I challenge the effectiveness of RDF on a number of points:

* The effectiveness of the graph structure for conveying data machine to machine
* The importance of aggregation at the graph level
* The mechanisms for seeing vocabulary evolve, and for mixing vocabularies

Graph Structure Effectiveness

When we are talking about pure machine-to-machine integration I see the end-to-end process as follows:

* Machine 1 has information in an internally-defined structure
* Machine 1 encodes this information into a representation
* Machine 2 receives the representation
* Machine 2 decodes the representation into its own internal structures

In some cases the internal structure will literally be the representation, ie a string. In other cases the gap between internal structure and representation will be wider. In very few cases do I think we will see an RDF-like graph as the internal structure on either side of this communication. In other words, the graph does not add value to machine-to-machine communications.
Higher-level structured XML documents have proven themselves as more effective. It is easier to encode information to or extract information from an atom document than from the equivalent RDF. RDF requires more complex model-to-model transformations than the easy-to-traverse tree structure of XML. RDF imposes an unnecessary burden on both sides of the information exchange that results in more code being written, rather than less.

Graph-level Aggregation

The core selling point of RDF seems to be the ability to aggregate arbitrary information into a single document. In the machine-to-machine example this has no value because the additional information won't be understood by the second machine and will be ignored. However, if we throw the data into an RDF triplestore we might be able to extract it later using appropriate SPARQL or other queries. This is really the core use case, I think, of RDF. I go and crawl the web and aggregate data that I can later run queries on. In other words, it is a way for the google of the semantic web to learn "everything" and to have queries run against it.

While this might be possible, it relies on a controlled set of vocabularies being defined. As I will point out later in the document, I don't think RDF is as conducive to good vocabulary evolution as XML. The use case is also limited. It doesn't really impact on the likelihood that arbitrary components of the architecture will be able to have a meaningful conversation. It just allows particular kinds of search.

We can see this with early RSS. RSS was defined in terms of RDF so that it could be easily aggregated. However, aggregation did not happen at the RDF level in practice. Instead, RSS was aggregated at a higher level.

Aggregation and Evolution

I am not a fan of XML namespaces. We have a MIME type that defines what kind of document we are parsing. My inclination is usually to ignore any XML namespace and just rely on the MIME type to get me home. Some documents include sub-documents.
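The extraction claim above can be illustrated with a toy comparison: pulling entry titles out of a small Atom tree is a direct path walk, while reconstructing the same facts from a flat set of triples requires the consumer to join on subjects. This is only a sketch; the triple predicates (`hasEntry`, `title`) and blank-node names are invented stand-ins, not any real RDF vocabulary:

```python
import xml.etree.ElementTree as ET

A = "{http://www.w3.org/2005/Atom}"

feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>First</title></entry>
  <entry><title>Second</title></entry>
</feed>"""

# Tree side: structure is explicit, so extraction is a direct traversal.
xml_titles = [e.find(A + "title").text
              for e in ET.fromstring(feed).findall(A + "entry")]

# Graph side: the same facts as subject/predicate/object triples carry no
# inherent path; the consumer rebuilds the structure by joining subjects.
triples = [
    ("_:feed", "hasEntry", "_:e1"),
    ("_:feed", "hasEntry", "_:e2"),
    ("_:e1", "title", "First"),
    ("_:e2", "title", "Second"),
]
entries = [o for s, p, o in triples if s == "_:feed" and p == "hasEntry"]
rdf_titles = [o for s, p, o in triples if p == "title" and s in entries]
```

Both sides yield the same titles; the argument in the post is about which traversal is cheaper to write and maintain, not about expressiveness.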
For example, atom can include xhtml for use in some elements. The containing element indicates to anyone who can process atom what they should be doing with the content, regardless of its type. In this case it is an XML namespace that selects the parser for use in this sub-document. A more uniform scheme would be preferable.

As well as allowing targeted aggregation, XML can be subclassed. Must-ignore semantics mean that additional elements in a document will be ignored by old implementations. This allows new versions of the document type to be deployed without breaking the architecture. It also allows extensions to be added for various purposes. If we continue to use MIME we can be specific about particular kinds of subclasses. For example, I might sub-class atom for the special purpose of indicating the next three trains that will arrive at a railway station: application/pids+atom+xml.

RDF isn't really as flexible. You can include foreign statements for aggregation, but you can't easily control what they mean in your context. RDF statements are intended to be context-free, but even very generic statements like dc:author are likely to need some assumptions to be made in order to interpret them correctly. They are not part of the parent vocabulary, so aren't really part of the parent document type. If you want to subclass a vocabulary you are looking at defining your own vocabulary in its own namespace that extends the original one. While this might be ok initially, it is as vocabularies evolve that you run into trouble. Once the wider community around a particular vocabulary sees my extensions as valuable, how do they move into the standard sphere? Do they need to keep the namespace in which they were first introduced?
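The must-ignore behaviour described above is simple to sketch: a consumer walks an entry's children, handles the elements it knows, and silently skips the rest, so a newer document survives an older client. The `pids` namespace below is hypothetical, echoing the application/pids+atom+xml example:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A newer entry carrying an extension element an old consumer never heard of.
entry = """<entry xmlns="http://www.w3.org/2005/Atom"
                  xmlns:pids="http://example.org/pids">
  <title>Platform 4</title>
  <pids:next-train>17:42</pids:next-train>
</entry>"""

# The elements this (old) consumer was written against.
KNOWN = {ATOM + "title", ATOM + "updated", ATOM + "id"}

def consume(doc):
    """Must-ignore: handle what we know, skip the rest without failing."""
    handled, ignored = {}, []
    for child in ET.fromstring(doc):
        if child.tag in KNOWN:
            handled[child.tag] = child.text
        else:
            ignored.append(child.tag)  # an old client just moves on
    return handled, ignored

handled, ignored = consume(entry)
```

The point in the post is that the extension rides along without breaking anything, and a newer consumer that does know `pids:next-train` can pick it up from the same document.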
There is an argument to be made[1] that whenever the architecture demands that you constrain something (methods, document vocabulary, document types), providing an infinite namespacing scheme actually helps different communities avoid conflict and face-offs that they should be having. If html had made it easy for netscape and microsoft to put their extensions into their own namespaces, would html still be the strong standard that it is today? Would we have to remember whether it was microsoft or netscape who implemented the blink tag in order to include it into our documents?

Namespaces should be controlled when the architecture demands constraints in the area they govern. Additionally, the argument is there to continue using the main document namespace for any extensions rather than introducing a new namespace, whether they have been ratified by anyone or not. Sub-classing the MIME type is a good way to avoid namespace conflict in the short term; then, as vocabulary and structure move from the special to the general and back again, we don't have to keep track of exactly where terms originated.

Conclusion

RDF does not seem to have the vocabulary evolution mechanisms that XML content types have available to them. RDF over-emphasises the low-value graph model, and under-emphasises high-value problem-specific structure. I think that the semantic web will not be constructed of a single abstract model, but one that is built up of solutions to various specific problems.

I see a semantic web of document types that include a little html, structure themselves around atom, and add a number of other document types into a single structure for good measure. I see a bounded number of actual document types that are built up in this way so that components of the architecture can understand messages that are sent to them, rather than just aggregating their data. I see the semantic web in terms of machine-to-machine integration, rather than data-to-database aggregation.

Benjamin.
[1] http://www.mnot.net/blog/2006/04/07/extensibility
On 2/20/07, Andrzej Jan Taramina <andrzej@...> wrote:
>
> I'm curious if anyone else is using/has used/is considering ATOM and/or APP
> for other purposes, especially in the integration space?

We're using it to facilitate dynamic (runtime) service discovery - if you have many instances of a particular service, how do you find out what's available and then select one to invoke? Service instances push service context documents to feeds (each service has its own entry in the feed that it updates as its context changes). Consumers can query the feeds to find service instances. We also aggregate feeds, but it's a push model rather than the pull model that most feed aggregators use.

We're thinking about expanding the model to include subscriptions to feeds (in the jms/queuing sense) so we don't have to statically configure whom changes get pushed to (I thought I'd run across an rfc for this). And we're also thinking about what we might learn if we tried to apply some of the concepts from some of the WS-* security stuff to Atom.

--Chuck
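In the scheme Chuck describes, each service instance would publish an Atom entry describing itself into a feed, and update it as its context changes. A rough sketch of building such a service-context entry; the URN scheme, endpoint URL, and field choices are made up for illustration:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

def service_entry(service_id, endpoint, status, updated):
    """Build a minimal Atom entry describing one service instance."""
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element("{%s}entry" % ATOM_NS)
    ET.SubElement(entry, "{%s}id" % ATOM_NS).text = service_id
    ET.SubElement(entry, "{%s}title" % ATOM_NS).text = "%s (%s)" % (endpoint, status)
    ET.SubElement(entry, "{%s}updated" % ATOM_NS).text = updated
    # The alternate link carries the actual invocable endpoint.
    ET.SubElement(entry, "{%s}link" % ATOM_NS, rel="alternate", href=endpoint)
    return ET.tostring(entry, encoding="unicode")

doc = service_entry("urn:svc:quote:1", "http://example.org/quote/1",
                    "up", "2007-02-21T12:00:00Z")
```

A consumer doing discovery would then just GET the feed, filter entries by whatever context fields it cares about, and follow the link of the instance it picks.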
Would it be possible to use the Zend framework and/or PHP RESTfully, in a resource-oriented style, by ignoring their so-called Rest Server?
On 2/24/07, Bob Haugen <bob.haugen@...> wrote:
> Would it be possible to use the Zend framework and/or PHP RESTfully,
> in a resource-oriented style, by ignoring their so-called Rest Server?

Of course. I've got important services in my SOA running as PHP scripts. I use Apache and mod_rewrite fronting it, and PHP5 running as FastCGI instances to make it lightning fast. Works like a charm.

I was earlier looking at the Zend_rest thing, but left in disgust as they clearly had no idea; they just wrap RPC in request parameters. I've been thinking for a while about getting back to them to see if this could be fixed, making it truly RESTful (because other parts of the Zend framework are really good, and I use them a lot), but there are other concerns, such as the lack of concurrent unsynchronized requests with the underlying Zend_http (which Zend_rest relies on). I'm thinking of writing my own implementation (possibly next week) and possibly donating some code. Unfortunately for me, this means work. :)

Kind regards,

Alexander
-- 
---------------------------------------------------------------------------
Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps
------------------------------------------ http://shelter.nu/blog/ --------
Thanks for this - I've looked at OpenID and felt it was a browser-UI-only based solution and not one for automation APIs. I don't have any immediate plans to incorporate OpenID in my new company (www.othersonline.com if anyone is interested) but will keep my eyes open. > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Nic James Ferrier > Sent: Wednesday, February 21, 2007 6:03 AM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] ANN: prooveme.com OpenID provider > > Hi peeps > > This is not the exciting REST application I've been working > on (that's held up as the search for funding goes down the > toilet) but it is still quite interesting - I think. > > There are two problems with OpenID as far as I can see: > > - the authenticating user is directed away from the website they want > to login to in order to login. > > I don't think many users will like or understand this. I don't think > that many webstores (and other sites) will like this. > > - having a machine authenticate on your behalf (outside the browser) > is difficult and requires new protocols to be supported. > > prooveme.com attempts to solve both these problems by giving > each OpenID user a client certificate. Now when the user > authenticates the auth happens immediately (the OP can say > "does the user have the cert? > yes or no?"). Users won't see any website change as they > login (though they do see a page change obviously). > > prooveme.com can also help you get a machine to authenticate > on your behalf. If you give your certificate to a machine > then it can use the existing OpenID protocol to login... as > long as the machine's HTTP client supports HTTPS and > redirects. As an example, CURL does. 
> > My idea is to build a small amount of GUI into prooveme.com > to allow users to generate additional certificates with time > or login attempt constraints and to allow those certificates > to be distributed to supporting clients. > > If anybody has any thoughts I'd be really interested to hear them. > > -- > Nic Ferrier > http://www.tapsellferrier.co.uk for all your tapsell ferrier needs > > > > Yahoo! Groups Links > > >
"S. Mike Dierken" <dierken@...> writes:

> Thanks for this - I've looked at OpenID and felt it was a browser-UI-only
> based solution and not one for automation APIs.

This is the part about prooveme.com that I find really interesting. But to make it work I think we're going to need new APIs to make provisioning certificates easier.

For example, if I want to grant Flickr the right to upload photos to my blog, I need to give Flickr a certificate which allows it to do that, and only that.

Which is why, in part, I posted it here. I think those APIs should be RESTful (obviously). I hope to involve you guys in that.

-- 
Nic Ferrier
http://www.tapsellferrier.co.uk for all your tapsell ferrier needs
"Bob Haugen" <bob.haugen@...> wrote: > Do you mean BPEL literally, or any way to do orchestration or > choreography RESTfully? > > The w3c Choreography group, which I was part of for awhile, started > out using pi-calculus as its basis, which I think has some similarity > to REST. But of course they did not end up there... Ah, pi-calculus! It has been a while since I've seen someone utter that term. In my prior life at M$, we did a lot of research on applying it to writing applications. While that experience formed the design of how our REST framework (MindTouch Dream) pipes requests through the system, I can't say it's very pi-ish. Pi's ultra-focus on discrete events made it a pain working with sequences of messages and open-ended streams of information. Unfortunately, these are more often the norm than the exception. If you're familiar with pi, you might enjoy reading about Cues in Dream. They are akin to ports with sessions. Cheers, - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org [1] http://doc.opengarden.org/Articles/Cues
Bob asks: > Do you mean BPEL literally, or any way to do orchestration or > choreography RESTfully? I meant the latter, declarative tools to do orchestration/choreography of resources RESTfully.... Anyone know of any such tools? Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
On 2/24/07, Steve G. Bjorg <steveb@...> wrote:
> In my prior life at M$, we did a lot of research on
> applying [pi-calculus] to writing applications. While that experience formed the
> design of how our REST framework (MindTouch Dream) pipes requests
> through the system, I can't say it's very pi-ish. Pi's ultra-focus on
> discrete events made it a pain working with sequences of messages and
> open-ended streams of information. Unfortunately, these are more
> often the norm than the exception.

I did not have the whole of pi-calculus in mind when I made my rash invocation. Just that in my limited understanding, pi-calculus is similarly stateless. The pi-calculus jocks in the w3c choreography group thought it would be possible to do choreography on the web in a stateless manner (using something analogous to the REST definition of stateless). That ended up being a bit too radical for the eventual group consensus.

I see they are still at it: http://lists.w3.org/Archives/Public/public-ws-chor/

This group is implementing the resultant WS-CDL specs, apparently in a more pi-calculus style: http://www.pi4tech.org/tiki-index.php

I'm way out of touch, but I don't think that stuff ended up being RESTful in any sense of the word.
Benjamin Carlyle wrote:
> In other words, the graph does not add value to machine to machine
> communications. Higher-level structured XML documents have proven
> themselves as more effective. It is easier to encode information to or
> extract information from an atom document than from the equivalent RDF.
> RDF requires more complex model-to-model transformations than the
> easy-to-traverse tree structure of XML. RDF imposes an unnecessary
> burden on both sides of the information exchange that results in more
> code being written, rather than less.

I'm not sure what you mean, but it doesn't jibe with me. I think I would disagree.

> Graph-level Aggregation
> The core selling point of RDF seems to be the ability to aggregate
> arbitrary information into a single document. In the machine-to-machine
> example this has no value because the additional information won't be
> understood by the second machine and will be ignored. However, if we
> throw the data into an RDF triplestore we might be able to extract it
> later using appropriate SPARQL or other queries.

Again I don't know what you mean. What's a "machine" insofar as giving it a data fragment is assured to be pointless? That would seem to be anything written that is designed not to understand RDF, which is kind of circular. If you're saying "there's no point emitting RDF because nothing downstream can treat it as RDF", then I would agree with you:

- statistically speaking, nobody deploys code to process RDF as RDF, and
- definitely nobody writes code that does much more than switch on type unless they are mapping to local domain entities (eg 'User'), and,
- if they are doing that, then received XML is easier to map in because XML is equally as inflexible.

But if you have RDF on both ends, you don't tend to have model-to-model mappings (ie the entire economic basis of the systems integration business). Where you do map, you can do so with much more precision.
I think the thing to bear in mind here is that most IT systems don't have "models" in the RDF sense. In RDF a model is more like the formal definition of a programming language than an object written in that language. Real interchange with RDF assumes a shared runtime - it's the Web's version of the .Net CLR.

> As I will point out later in the document, I
> don't think RDF is as conducive to good vocabulary evolution as XML.

XML isn't conducive to vocabulary evolution either. This is a very strange juxtaposition. Most XML vocabularies I've seen that declare an extensibility model end up defining a subset of what RDF defines.

[I would say you and I are going to disagree on extensibility at some point. Let's get that out of the way. For example, what the Atom Format does in its extension model is what I call "modularity". What RDF does is more akin to what Smalltalk and Lisp people might consider to be "extensibility". You should also know that I think RDF has about as much to do with XML as ECMAScript has to do with JSON. They're not even both fruit.]

> RSS was defined in terms of RDF so that it
> could be easily aggregated. However, aggregation did not happen at the
> RDF level in practice. Instead, RSS was aggregated at a higher level.

But you don't say why that was. Why was that?

> As well as allowing targeted aggregation, XML can be subclassed.

Some of the worst thinking around XML is down to this - in the form of inheritance, type inference, acquisition, implicit values, and making overreaching assumptions about what element containment means. It has always resulted in a mess because it assumes a processing model that is rarely written down.

> Must-ignore semantics mean that a document with additional elements will
> be ignored by old implementations.

mI in my mind is about having a trailing "else" in the code that logs to disk instead of throwing an exception. It's a sensible programmatic default.
> This allows new versions of the
> document type to be deployed without breaking the architecture. It also
> allows extensions to be added for various purposes. If we continue to
> use mime we can be specific about particular kinds of subclasses. For
> example, I might sub-class atom for the special purpose of indicating
> the next three trains that will arrive at a railway station:
> application/pids+atom+xml.
>
> RDF isn't really as flexible.

I can't agree. RDF's handling of unknown triples is far more flexible than mI. Too flexible for most programmers, in fact. As for subclassing, how would I know what you're doing with MIME specializations unless I baked that knowledge into my code? RDF at least has class based inference built in.

> Conclusion
>
> RDF does not seem to have the vocabulary evolution mechanisms that XML
> content types have available to it.

Sure. RDF has one vocabulary evolution mechanism, and all the XML vocabularies have one of their own.

[aside: it's weird to watch people argue up the uniform interface as a key constraint of REST, but happily rail on uniform data.]

Here's the problem with RDF and its style of information architecture - it's being pushed on a world that isn't ready for it. It's not wrong, it's impractical. There's a hierarchy of needs, which for most people means that RDF can't be a practical concern right now. We have more immediate, base concerns, like encoding, and tags, and addressing, and translations, and indexing, and term search, and malformedness. The W3C missed this point for the better part of a decade, by assuming that syntax didn't matter, whereas for most people syntax is everything. The consortium telling developers in these times that syntax doesn't matter is like Marie-Antoinette telling people to eat cake.
The web will evolve to support interlingua of the kind the W3C thinks we should have now; the success of application protocols clearly indicates that speech act style communications do work, and, short of a paradigm shift, that suggests interlingua are very likely critically important to the future of distributed systems. But the future where they are generally available and used is years off. Broad use of horseless carriages implies a good enough road infrastructure gets built out in advance.

cheers
Bill
Hi Mike,

Sorry it took me so long to answer this. I've been busy, busy, busy and couldn't find the time until now.

On Feb 7, 2007, at 1:19 AM, Mike Schinkel wrote:

> Bill Venners wrote:
>> Perhaps I misunderstood you. I agree with your aesthetic
>> sense that paths are prettier than queries in URIs. But I
>> think that both path and queries are needed, so sometimes you
>> will have query parts. The question I was asking is which
>> form of embedding query params in URIs might be the most
>> pretty and user friendly?
>>
>> http://www.artima.com/articles?o=a&t=java&p=7
>>
>> Is the traditional way. But:
>>
>> http://www.artima.com/articles;a,tjava,p7
>>
>> or
>>
>> http://www.artima.com/articles~a,tjava,p7
>>
>> Could also be used in our architecture. I'm not sure that
>> they are much prettier than the traditional query form, but
>> the latter forms are shorter.
>
> Can you give some use cases where queries are *needed* (beyond one
> query parameter?)
>
> I'm not disagreeing, just wanting to see your use cases.

First of all, I ended up deciding to stay with the traditional query param form:

articles?o=a&t=java&p=7

instead of:

articles;a,tjava,p7

even though the latter form yielded shorter and, in some eyes perhaps, slightly less ugly URLs. The reason is that whenever you have a form whose method is GET, the query that the browser composes will use the traditional form. I definitely foresee having some such forms--I want to have a search box on every page, for example--so the option is to either 1) support both kinds of query params, 2) redirect the traditional form to the short form every time, or 3) put JavaScript in each page that contains such a form that captures the submit and rewrites the request URL in the browser before submitting. Amazon's a9.com search box actually uses number 3, whereas all the other major search engines I looked at just use the traditional form.
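[Editor's note: the two encodings can be compared with a small sketch. The grammar of the compact form is my own guess from Bill's examples - a single-letter key followed by the rest of the token as its value - so treat it as hypothetical.]

```python
from urllib.parse import urlsplit, parse_qs

def params_traditional(url):
    """?o=a&t=java&p=7 -- the form a browser emits for a GET form."""
    return {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}

def params_compact(url):
    """;oa,tjava,p7 -- guessed grammar: one-letter key, rest is the value."""
    path = urlsplit(url).path
    if ";" not in path:
        return {}
    tokens = path.split(";", 1)[1].split(",")
    return {t[0]: t[1:] for t in tokens if t}

print(params_traditional("http://www.artima.com/articles?o=a&t=java&p=7"))
print(params_compact("http://www.artima.com/articles;oa,tjava,p7"))
```

Both calls recover the same parameter dictionary, which is the point: the compact form only buys shorter URLs, at the cost of writing (and maintaining) the second parser yourself, since browsers and standard libraries only speak the traditional form.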
Compare a9's URL for a search for "dogs":

http://a9.com/dogs

To Google's:

http://www.google.com/search?q=dogs

Or Yahoo's:

http://search.yahoo.com/search?p=dogs

Now, to answer your real question about whether query parameters are *needed*, I suspect the answer is technically no, but practically yes. I think you could likely find a way to cram whatever info you wanted into a URL path. Amazon did a nice job of sticking the search query in as the path above. The trouble is when you have a multi-dimensional matrix of pieces of info you want to pass in via a URL, and it just doesn't fit well in a plain old hierarchy. With query params you can just leave a query parameter out and it means that param has its default value. If you're only using path, you'll end up having one path element for each param, and a value that means null, but takes up space in the URL. Not a crisis, perhaps, but kind of funky.

In my self-imposed requirements, I'd like the path portion to help users intuit the information architecture of the site, so I'd like to leave query-param-style info out of the path. I want to use path rather than query param whenever I can, but I find cases where I need query params. For a specific use case, I want every subpath under /articles to be an article, so:

/articles/why_put_and_delete

and

/articles/url_design

would be articles. But I also want people to be able to look at the list of articles at /articles based on topic. There was, I think, a suggestion on this list to put the list of articles that are just about Java at:

/articles/java

Trouble is that it overloads the meaning of the subpath. Sometimes it means an article, and sometimes a topic (such as Java), and that's confusing for users trying to figure out the info architecture with the URLs. Moreover, what if later we wanted to add a "URL Design" topic? If we tried to put that at /articles/url_design, it wouldn't work because there's already an article there.
Because I'm trying to place two different concepts (article and topic) in the same subpath position, each of which has its own namespace, the namespaces can collide. If I try and put both things in the path, I end up with:

/articles/java

and

/articles/x/url_design

where I've chosen x to mean null or unused. I think that would work, but what we've done is crammed a query param-like piece of information into the path. Instead I use a query param to pass in the topic, as in:

/articles?t=java

And an article doesn't need anything to indicate it has no topic, other than not mentioning a query param:

/articles/url_design

Bill

----
Bill Venners
President
Artima, Inc.
http://www.artima.com
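[Editor's note: Bill's scheme - every subpath under /articles is an article, topics arrive as a query param - can be sketched as a toy dispatcher. The handler names are hypothetical; the routing rules follow his post.]

```python
from urllib.parse import urlsplit, parse_qs

def dispatch(url):
    """/articles/<slug> is always an article; /articles?t=<topic>
    lists a topic.  The two namespaces can never collide."""
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    query = parse_qs(parts.query)
    if len(segments) == 2 and segments[0] == "articles":
        return ("article", segments[1])
    if segments == ["articles"]:
        # an absent param means its default value: all topics
        topic = query.get("t", [None])[0]
        return ("article-list", topic)
    return ("not-found", None)

print(dispatch("/articles/url_design"))  # an article, even if a topic shares the name
print(dispatch("/articles?t=java"))      # the Java topic listing
print(dispatch("/articles"))             # all articles
```

Note how a "URL Design" topic could be added later (?t=url_design) without colliding with the article of the same name, which is exactly the overloading problem the pure-path scheme ran into.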
On 2/24/07, Bill de hOra <bill@...> wrote:
> [aside: it's weird to watch people argue up the uniform interface as a
> key constraint of REST, but happily rail on uniform data. ]

Maybe it's because we haven't seen the benefits of uniform data yet. What we have today with XML is uniform structure, which has enough of an infrastructure that you have xml editors, xsl engines, xpath viewers, all of which use that structure. These tools help embed XML into the world (people expect XML formats for everything) while not actually helping users once you get beyond the structural basics, unless they are knee-deep in application specific code (i.e. Ant-aware editors that know about property settings, targets, <import> etc)

> Here's the problem with RDF and its style of information architecture -
> it's being pushed on a world that isn't ready for it. It's not wrong,
> it's impractical. There's a hierarchy of needs, which for most people
> means that RDF can't be a practical concern right now. We have more
> immediate, base concerns, like encoding, and tags, and addressing, and
> translations, and indexing, and term search, and malformedness. The W3C
> missed this point for the better part of a decade, by assuming that
> syntax didn't matter, whereas for most people syntax is everything. The
> consortium telling developers in these times that syntax doesn't matter
> is like Marie-Antoinette telling people to eat cake.
>
> The web will evolve to support interlingua of the kind the W3C thinks we
> should have now; the success of application protocols clearly indicates
> that speech act style communications do work, and short of a paradigm
> shift, that suggests interlingua are very likely critically important to the
> future of distributed systems. But the future where they are generally
> available and used is years off. Broad use of horseless carriages
> implies a good enough road infrastructure gets built out in advance.
If you look at the current generation of languages, they can just about handle hash tables, either in-language (python, ruby) or as garbage-collected object types (Java, C#). That's the limit of the language's structure, or simple lists. The old Sapir-Whorf hypothesis kicks in and people stop trying to express things the language makes hard. Or they start off with simple XML and evolve it over time to something that represents a graph, without a graph-centric data format (Ant build files, RPC/encoded SOAP, that new Systems Modeling Language, etc). From an RDF-purist perspective this is wrong, but for a developer, slowly evolving things, it works. What is more, being XML is a selling point. Whereas being RDF? Nobody cares right now.

On my todo list for the next few months is to do a better way of presenting test results; XML or XHTML under Atom is my likely choice, but I may allow the Jena team to propose an alternative if they can come up with something that works. Where "works" doesn't just mean "RDF inside an Atom feed", or even "extensible representation of results of distributed and cross platform tests", but "produces output that can be easily interpreted". It's a lot easier to tell people to pump stuff through an XSL engine or view in firefox than suggesting a facet viewer - even if the latter is more powerful.

-steve
Steve Loughran wrote: > On my todo list for the next few months is to do a better way of > presenting test results; XML or XHTML under Atom is my likely choice, uXUnit? cheers Bill
Wow, what a thread. I'll respond at greater length once I've re-read a couple of times and thought a bit... but there is one point I can pick up on right away, from Benjamin:

[[
I challenge the effectiveness of RDF on a number of points
* The effectiveness of the graph structure for conveying data machine to machine
]]

The Web is a graph structure. One perspective is that hyperlinks contained in documents (representations) express relationships between resources. The relationships are generally untyped, and only really become useful when the interlinked documents are connected with the verbs of HTTP. From this perspective, the graph structure isn't in itself used for conveying data machine to machine - it just *is*, in the declarative sense.

While RDF documents can contain graphs, these graphs can be viewed conceptually as little pieces of the Semantic Web, cached in the document. What RDF adds to the web's graph structure is a means of typing relationships, with the relationships being resources in their own right (properties in vocabularies). This brings in a dimension that's orthogonal to the documents - the resources and relationships offer a low-level data model. While on the one hand a web of linked data is a new idea, RDF could also be seen as a fairly degenerate entity-relationship model, the kind of thing programming languages have been manipulating for years.

Having said all that, I'm not sure this perspective is the one which indicates the most immediate gains from using RDF, but I do think it's an important one.

Cheers,
Danny.

--
http://dannyayers.com
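[Editor's note: Danny's distinction - RDF types the edges, where hyperlinks leave them untyped - can be illustrated with triples as plain tuples. The page names and vocabulary terms below are hypothetical abbreviations, not real URIs.]

```python
# A hyperlink only says "A links to B".  An RDF triple names the
# relationship itself, and that name (the predicate) is a resource too.
hyperlinks = [("pageA", "pageB"), ("pageA", "pageC")]  # edge type unknown

triples = [
    ("pageA", "dc:creator",   "pageB"),   # B is the creator of A
    ("pageA", "rdfs:seeAlso", "pageC"),   # C is merely related
]

def objects(graph, subject, predicate):
    """Follow only edges of a given type -- impossible with untyped links."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(objects(triples, "pageA", "dc:creator"))
```

With the untyped `hyperlinks` list, the best a client can do is enumerate everything `pageA` points at; with the typed graph it can ask specifically for the creator.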
Bill Venners wrote: > would be articles. But I also want people to be able to look at the > list of articles at /articles based on topic. There was I think a > suggestion on this list to put the list of articles that are just > about Java at: > > /articles/java Or you could use "/topics/java", which makes more sense to me. K.
On 2/26/07, Bill de hOra <bill@...> wrote:
> Steve Loughran wrote:
>
> > On my todo list for the next few months is to do a better way of
> > presenting test results; XML or XHTML under Atom is my likely choice,
>
> uXUnit?

No witty name at all. It's intended to be framework neutral, though that gets complex once you start looking at partial successes like "succeeded but took too long" from a performance tool, or "succeeded with multiple valid results" (Prolog unit tests), or even "don't know". I also want to collect as much system state as possible, so you can have the machines work out what tests/systems that are failing have in common.

The results would get served up as atom feeds under various tags/labels, so you could subscribe to all failing tests assigned to me:

http://server/tests/failing/steve

Every test would come with description, output, etc. Some people want to add flash videos or vmware images, which could be served alongside.
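[Editor's note: Steve's results-as-feeds idea might be sketched like this. The URL scheme, result fields, and feed contents are all hypothetical, and the feed is deliberately minimal (it omits required Atom elements like updated and author).]

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def feed_of_failures(base, owner, results):
    """Build a minimal Atom feed of one owner's failing tests -- the
    kind of thing a client might GET from /tests/failing/<owner>."""
    feed = ET.Element("{%s}feed" % ATOM)
    ET.SubElement(feed, "{%s}title" % ATOM).text = "failing tests: " + owner
    ET.SubElement(feed, "{%s}id" % ATOM).text = "%s/tests/failing/%s" % (base, owner)
    for r in results:
        if r["status"] != "fail" or r["owner"] != owner:
            continue
        entry = ET.SubElement(feed, "{%s}entry" % ATOM)
        ET.SubElement(entry, "{%s}title" % ATOM).text = r["name"]
        ET.SubElement(entry, "{%s}summary" % ATOM).text = r["output"]
    return feed

results = [
    {"name": "testDeploy", "status": "fail", "owner": "steve", "output": "timeout"},
    {"name": "testPing",   "status": "pass", "owner": "steve", "output": "ok"},
]
feed = feed_of_failures("http://server", "steve", results)
print(len(feed.findall("{%s}entry" % ATOM)))  # only the failing test appears
```

Because it's plain Atom, any feed reader could subscribe to the failures; richer payloads (videos, vmware images) would hang off the entries as links rather than change the feed format.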
Hi all,

I have been messing around with a framework idea for Java for the past 4 months and finally have been able to make it publicly available. It's called RESTEasy, and you can read more about it here:

http://resteasy.damnhandy.com/

At present, it's very much a work in progress and has many rough edges and typos. It utilizes JAXB2 quite heavily and integrates very nicely with JPA and Hibernate 3. It should be noted that JAXB will not be a requirement. Curious to hear anyone's thoughts.

Ryan-
Ryan wrote:
> I have been messing around with a framework idea for Java for
> the past 4 months and finally have been able to make it
> publicly available. It's called RESTEasy, and you can read
> more about it here:
>
> http://resteasy.damnhandy.com/
>
> At present, it's very much a work in progress and has many
> rough edges and typos. It utilizes
> JAXB2 quite heavily and integrates very nicely with JPA and
> Hibernate 3. It should be noted that JAXB will not be a
> requirement. Curious to hear anyone's thoughts.

How is this an improvement over RESTlet[1]? Does this mean we're going to see lots of incompatible REST architecture implementations? Didn't Python learn what a problem lots of incompatible implementations bring, and introduce WSGI to try to stem the bleeding?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away attempts to improve the web..."

[1] http://www.restlet.org
Mike Schinkel wrote: > How is this an improvement over RESTlet[1]? Does this mean we're going to > see lots of incompatible REST architecture implementations? I certainly hope so. Only by trying a lot of different things can we see what works. Furthermore, I see no particular reason to standardize on the server. The interface between clients and servers is standard, but not the interfaces by which servers communicate on their own system. If we bless one true architecture for serving REST, we will ultimately harm interoperability since we'll almost certainly bake assumptions into our code that are only true of that one local framework. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Harold wrote:
> Mike Schinkel wrote:
>
> > How is this an improvement over RESTlet[1]? Does this mean we're going
> > to see lots of incompatible REST architecture implementations?
>
> I certainly hope so. Only by trying a lot of different things
> can we see what works.

Variety for variety's sake is NOT good, IMO. That's why I asked what was the difference.

> Furthermore, I see no particular reason to standardize on the server.
> The interface between clients and servers is standard, but
> not the interfaces by which servers communicate on their own system.

Have you ever looked at Python's WSGI?

> If we bless one true architecture for serving REST, we will
> ultimately harm interoperability since we'll almost certainly
> bake assumptions into our code that are only true of that one
> local framework.

I don't see how a significant number of arbitrarily different implementations can ever help interoperability. I can only see how it can harm it.

Don't get me wrong; I'm not saying there can be only one (unless you are an immortal, but I digress...) What I am saying is that a bunch of implementations whose differences are based purely on arbitrary happenstance is nothing more than ego-gratification for the developers and not at all good for evolving interoperable solutions.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
Hi Keith,

On Feb 26, 2007, at 2:23 AM, Keith Gaughan wrote:

> Bill Venners wrote:
>
>> would be articles. But I also want people to be able to look at the
>> list of articles at /articles based on topic. There was I think a
>> suggestion on this list to put the list of articles that are just
>> about Java at:
>>
>> /articles/java
>
> Or you could use "/topics/java", which makes more sense to me.

Trouble with that is it isn't specific to articles. We also have other categories, such as news. And we want a specific URL for letting people browse Java news and a different one for Java articles. Also, I want to have something useful at each subpath. I could put a list of topics at:

/topics

but that's not really deserving of its own page. Also, I want to offer these lists sorted alphabetically and by reverse pub date. So going in this direction I'd need to do something like:

/alpha/topics/java

or

/topics/alpha/java
/topics/pubdate/java

And what would you put at

/alpha

Also, when you have articles about Java listed at /articles?t=java, people can kind of get the idea from the URL that this is a list of articles rather than a list of news items.

Bill

----
Bill Venners
President
Artima, Inc.
http://www.artima.com

> K.
Bill Venners wrote:
> >> would be articles. But I also want people to be able to look at the
> >> list of articles at /articles based on topic. There was I think a
> >> suggestion on this list to put the list of articles that are just
> >> about Java at:
> >>
> >> /articles/java
> >
> > Or you could use "/topics/java", which makes more sense to me.
>
> Trouble with that is it isn't specific to articles. We also
> have other categories, such as news. And we want a specific
> URL for letting people browse Java news and a different one
> for Java articles.

/articles/
/topics/
/news/

> Also, I want to have something useful at
> each subpath. I could put a list of topics at:
>
> /topics
>
> but that's not really deserving of its own page.

Why not? It should list the topics available to be covered.

> Also, I want to offer these lists sorted alphabetically and by
> reverse pub date. So going in this direction I'd need to do
> something like:
>
> /alpha/topics/java
>
> or
>
> /topics/alpha/java
> /topics/pubdate/java

Sorting is not an entity, it is a layout/formatting directive. IOW, it doesn't change the content, only its presentation. So in a path it should be at the end, not the beginning, or in a param:

/topics/java/alpha
/topics/java?sort=alpha

I'd probably prefer the latter.

> And what would you put at
>
> /alpha

Nothing.

> when you have articles about Java listed at /articles?t=java,
> people can kind of get the idea from the URL that this is a list of
> articles rather than a list of news items.

But they can get the same idea if you segment as:

/articles/
/news/

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
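[Editor's note: the sorting-is-presentation rule might look like this in code. The param name `sort`, the default, and the item fields are assumed for illustration, not taken from either site.]

```python
from urllib.parse import urlsplit, parse_qs

def topic_listing(url, items):
    """/topics/java lists newest-first by default; ?sort=alpha
    re-presents the same content without changing what the URL names."""
    query = parse_qs(urlsplit(url).query)
    sort = query.get("sort", ["pubdate"])[0]  # absent param = default
    if sort == "alpha":
        return sorted(items, key=lambda i: i["title"])
    return sorted(items, key=lambda i: i["date"], reverse=True)

items = [
    {"title": "Why PUT and DELETE", "date": "2007-03-01"},
    {"title": "URL Design",         "date": "2007-02-01"},
]
print(topic_listing("/topics/java", items)[0]["title"])             # newest first
print(topic_listing("/topics/java?sort=alpha", items)[0]["title"])  # alphabetical
```

Both URLs identify the same resource (the Java topic listing); the query param only reorders the representation, which is why it doesn't deserve a path segment of its own.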
: Don't get me wrong; I'm not saying there can be only one (unless you are an : immortal, but I digress...) What I am saying is that a bunch of : implementations whose difference are based purely on arbitrary happenstance : is nothing more than ego-gratification for the developers and not at all : good for evolving interoperable solutions. Mike, I believe this list is an appropriate place for brainstorming all sorts of REST ideas, including implementations, and I think your concern is misplaced. Maybe it would be different if this were a working group mail list, but it's "Rest discuss". Live and let live, please. Walden
Walden Mathews wrote: > : Don't get me wrong; I'm not saying there can be only one > (unless you are an > : immortal, but I digress...) What I am saying is that a bunch of > : implementations whose difference are based purely on > arbitrary happenstance > : is nothing more than ego-gratification for the developers > and not at all > : good for evolving interoperable solutions. > > Mike, I believe this list is an appropriate place for > brainstorming all sorts of REST ideas, including > implementations, and I think your concern is misplaced. > Maybe it would be different if this were a working group mail > list, but it's "Rest discuss". I was "discuss"ing my architectural concerns, nothing more. > Live and let live, please. Ditto. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
> So RESTEasy brings the count to 4, which is far from a
> "significant number of arbitrarily different implementations."

Actually the operative phrase was "arbitrarily different" not "significant number."

> There are far more Java Web Frameworks out there that
> haven't stifled interoperability. At the end of the day, a
> user is presented with a web page. It's still a duck.

I beg to differ. The Web Frameworks landscape is filled with landmines...

> Sure, it may just be ego-gratification, but it's also
> about sharing ideas too. Unfortunately, this is
> apparently something you seem to disagree with.

I have no problem with sharing ideas. But just as Roy Fielding does not believe it is a good thing for a GET to change state, I do not believe it is a good thing to have lots of libraries and frameworks offered with what amounts to arbitrary differences. I think it should be a best practice to have *consideration* for prior art and not to duplicate prior art if there are no obvious benefits. I think the unnecessary fragmentation of libraries and frameworks holds back progress.

There's nothing wrong with coding to learn and share ideas. But when YAF (Yet Another Framework) gets promoted for no obvious benefit with no interoperability with others, that's when I see harm occurring. As it is the nature of the programmer to reinvent the wheel, I think there needs to be people who make this point as a counterbalance, i.e. to say "You really should try not to do this unless it is really needed." Please note I've made no judgement on whether your project's differences are really needed as I don't have the expertise to judge the specifics. I'm just calling the question.

One of my many planned projects is to write a book on this topic. Or at least a thesis. :)

> If folks
> like Gavin King didn't gratify their egos, there wouldn't
> be Hibernate or EJB3 in its current form. It's the
> communication and presentation of new ideas that lead up
> to a pretty decent API.
> And even though there is a Java
> Persistence standard, there are still other options
> out there.

Nothing wrong with sharing ideas, it's divergent implementations that start to cause the problem.

On the flip side, is there anything wrong with discussing your ideas with people who have similar projects so as to attempt to find common ground for interoperability?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away attempts to improve the web..."

P.S. I would hope that you'd listen to my concerns rather than get defensive, the latter of which doesn't do either of us any good.
--- In rest-discuss@yahoogroups.com, "Mike Schinkel" <mikeschinkel@...> wrote:
> How is this an improvement over RESTlet[1]?

I wouldn't call RESTEasy an improvement over RESTlet, I think RESTlet is an excellent piece of work. RESTEasy takes a different approach to how a RESTful web service is declared. RESTEasy is an annotation-driven framework that makes use of JAXB and has strong integration for EJB3 Entity beans and Hibernate. RESTlet takes more of a traditional approach and is optimized to use Java NIO.

> Does this mean we're going to see lots of incompatible REST architecture implementations?

A RESTlet client should be able to interact with RESTEasy just fine when RESTEasy is complete. Conversely, RESTEasy should be able to consume RESTlet services. How the services are defined will be different, yes. However, both RESTEasy and RESTlet are thin enough that swapping out one service implementation for another should be painless. At this point, RESTEasy is just a series of annotations on an EJB. RESTlet, from my basic understanding of it, can be implemented as a delegate to an SLSB or some other class.

As for the WSGI argument, there's only about 3 other REST frameworks for Java right now, so I'm not concerned that the segment is getting crowded.

(BTW: sorry for emailing you directly Mike, I didn't realize that the responses go directly to the sender rather than the group. My bad :( )

Ryan-
You seem to cite these "arbitrary differences" but yet have not been able to offer any specifics on what these "arbitrary differences" are. Out of curiosity, did you by any chance even read the pages for RESTEasy and then compare that with RESTlet? I get the feeling you only read the subject lines and went into a panic? You also cite interoperability issues, but again offer no details on what those interoperability issues might be.

But yeah, it's probably true that I could have reached out to the RESTlet folks and seen what type of collaboration could take place, if any. And perhaps that opportunity could still be there, but now there is something I could point them at to see if we're all on the same page. Plus, people may have a look at it and say that it's the lamest attempt ever. Either way, it's out there for people to see and review. Additionally, I am in the process of trying to join JSR-311 if my employer would get off their arses and sign Exhibit B ;)

Ryan-

--- In rest-discuss@yahoogroups.com, "Mike Schinkel" <mikeschinkel@...> wrote:
>
> > So RESTEasy brings the count to 4, which is far from a
> > "significant number of arbitrarily different implementations."
>
> Actually the operative phrase was "arbitrarily different" not "significant
> number."
>
> > There are far more Java Web Frameworks out there that
> > haven't stifled interoperability. At the end of the day, a
> > user is presented with a web page. It's still a duck.
>
> I beg to differ. The Web Frameworks landscape is filled with landmines...
>
> > Sure, it may just be ego-gratification, but it's also
> > about sharing ideas too. Unfortunately, this is
> > apparently something you seem to disagree with.
>
> I have no problem with sharing ideas. But just as Roy Fielding does not
> believe it is a good thing for a GET to change state, I do not believe it is
> a good thing to have lots of libraries and frameworks offered with what
> amounts to arbitrary differences.
> I think it should be a best practice to
> have *consideration* for prior art and not to duplicate prior art if there
> are no obvious benefits. I think the unnecessary fragmentation of libraries
> and frameworks holds back progress.
>
> There's nothing wrong with coding to learn and share ideas. But when YAF
> (Yet Another Framework) gets promoted for no obvious benefit with no
> interoperability with others, that's when I see harm occurring. As it is the
> nature of the programmer to reinvent the wheel, I think there needs to be
> people who make this point as a counterbalance, i.e. to say "You really
> should try not to do this unless it is really needed." Please note I've
> made no judgement on whether your project's differences are really needed as
> I don't have the expertise to judge the specifics. I'm just calling the
> question.
>
> One of my many planned projects is to write a book on this topic. Or at
> least a thesis. :)
>
> > If folks
> > like Gavin King didn't gratify their egos, there wouldn't
> > be Hibernate or EJB3 in its current form. It's the
> > communication and presentation of new ideas that lead up
> > to a pretty decent API. And even though there is a Java
> > Persistence standard, there are still other options
> > out there.
>
> Nothing wrong with sharing ideas, it's divergent implementations that start
> to cause the problem.
>
> On the flip side, is there anything wrong with discussing your ideas with
> people who have similar projects so as to attempt to find common ground for
> interoperability?
>
> --
> -Mike Schinkel
> http://www.mikeschinkel.com/blogs/
> http://www.welldesignedurls.org
> http://atlanta-web.org - http://t.oolicio.us
> "It never ceases to amaze how many people will proactively debate away
> attempts to improve the web..."
>
> P.S. I would hope that you'd listen to my concerns rather than get defensive,
> the latter of which doesn't do either of us any good.
"Mike Schinkel" <mikeschinkel@...> writes:

> I have no problem with sharing ideas. But just as Roy Fielding does not
> believe it is a good thing for a GET to change state, I do not believe it is
> a good thing to have lots of libraries and frameworks offered with what
> amounts to arbitrary differences. I think it should be a best practice to
> have *consideration* for prior art and not to duplicate prior art if there
> are no obvious benefits. I think the unnecessary fragmentation of libraries
> and frameworks holds back progress.

The trouble is, we're in the early days of REST frameworks. You expect to see some proliferation before consolidation. The market (in its abstract sense rather than its specific monetary form) will sort it out.

--
Nic Ferrier
Need a linux/java/python hacker? I'm in need of work!
I'm having trouble understanding the interoperability issue. This is HTTP we're talking about here. How many different web servers do we have nowadays? App servers? Servlet containers? Maybe I haven't been paying attention, but I didn't realize there were any major interoperability issues between them - my browser seems to work with just about all of them with no problems. Why would having multiple REST frameworks cause interoperability issues? It's not like the HTTP coming in and out of a RESTful framework is going to be any different than the HTTP going in and out of a servlet engine - the headers and verbs have the same meaning regardless of what generates them.

I say bring on the frameworks. Each will have its own approach for facilitating RESTful development - some will use annotations, some will let you use URI-templating, others will make it easy to map database records to resources; I'll pick the one I use based on the needs of my current project. If it turns out that the framework I'm using doesn't suit my needs, at least I'll have some options.

--Chuck

On 2/26/07, Mike Schinkel <mikeschinkel@...> wrote:
> Elliotte Harold wrote:
> > Mike Schinkel wrote:
> >
> > > How is this an improvement over RESTlet[1]? Does this mean
> > > we're going to see lots of incompatible REST architecture implementations?
> >
> > I certainly hope so. Only by trying a lot of different things
> > can we see what works.
>
> Variety for variety's sake is NOT good, IMO. That's why I asked what was
> the difference.
>
> > Furthermore, I see no particular reason to standardize on the server.
> > The interface between clients and servers is standard, but
> > not the interfaces by which servers communicate on their own system.
>
> Have you ever looked at Python's WSGI?
>
> > If we bless one true architecture for serving REST, we will
> > ultimately harm interoperability since we'll almost certainly
> > bake assumptions into our code that are only true of that one
> > local framework.
> > I don't see how a significant number of arbitrarily different > implementations can ever help interoperability. I can only see how it can > harm it. > > Don't get me wrong; I'm not saying there can be only one (unless you are an > immortal, but I digress...) What I am saying is that a bunch of > implementations whose differences are based purely on arbitrary happenstance > is nothing more than ego-gratification for the developers and not at all > good for evolving interoperable solutions. > > -- > -Mike Schinkel > http://www.mikeschinkel.com/blogs/ > http://www.welldesignedurls.org > http://atlanta-web.org - http://t.oolicio.us > "It never ceases to amaze how many people will proactively debate away > attempts to improve the web..." > > Yahoo! Groups Links
> You seem to cite these "arbitrary differences" but yet > have not been able to offer any specifics on what these > "arbitrary differences" are. You didn't hear what I was saying because (it seems) you are taking a defensive posture. There's no reason for that. Instead I stated that it would concern me if there were just "arbitrary differences" and asked if there were "arbitrary differences" that existed. I didn't accuse you of having any per se, and I thought I clarified that in my last email. BTW, I've been concerned about these issues far before your email; your email just happened to come along at the right (wrong?) time... > Out of curiosity, did you by any chance even read the > pages for RESTEasy and then compare that with RESTlet? I > get the feeling you only read the subject lines and went > into a panic? Panic is far too harsh a word. I brought up a potential issue and asked about it. And no, I haven't looked at your systems because I'm not well versed in Java. But I do want to see REST become a success. > You also cite interoperability issues, but again offer no > details on what those interoperability issues might be. I'm not an expert in RESTlet, nor clearly RESTEasy, so I can't know specific issues. But I know a lot about past systems where components are not interoperable and the problems that causes. Just look at the fragmentation that Python WSGI attempts to solve (have you looked at WSGI?) > But yeah, it's probably true that I could have reached > out to the RESTlet folks and seen what type of > collaboration could take place, if any. And perhaps that > opportunity could still be there, I would highly recommend it! In large part that was the reason for my comments. > but now there is > something I could point them at to see if we're all on > the same page. Plus, people may have a look at it and say > that it's the lamest attempt ever. Either way, it's out > there for people to see and review. 
One of the reasons why I made my comment was because of the title of the thread "New REST framework for Java", the fact it has a marketing-friendly name, its own subdomain, and a website with Tutorials and such implied to me you are promoting a "new REST framework for Java" as an alternative to other solutions. It came across as far more than a thought exercise and instead something you were hoping to promote to see significant adoption. Had the email said "I'm trying to see if I get this REST thing right. I've written some code and would appreciate people looking at it to tell me I'm on the right track." I wouldn't have said a word. However, IMO, when you introduce a framework and you promote it to gain users, you have implicitly taken on much responsibility, and you can have a negative effect on the microcosm of the world if you create Y.A.F. without having good justification for doing so. To illustrate, let's discuss a hypothetical single framework that then has two competing implementations that add no real value; they are just different to be different. Before, someone would just choose the one. Afterwards, people need to invest significant time into evaluating the different frameworks before choosing. And instead of everyone contributing to one, now you have contributions divided among three. Different training has to be developed. Finding developers with experience in a specific one is harder. Etc. etc. Essentially you limit the network effects, and network effects can provide huge value. Actually neither situation is ideal. What's better, IMO, is to work against shared interfaces when possible, and implement divergent enhancements off those shared interfaces. Without doing that, you end up with solutions that just cannot be had because of an either/or situation. One reason Microsoft has such success is there is far less fragmentation that occurs on the Windows platform. People look to Microsoft to set the standards. 
In other areas fragmentation rationalized by "programmer choice" balkanizes the development landscape and reduces critical mass for any one solution. BTW, there are many other different issues I have with Microsoft's approach which I've blogged about a lot lately [1], so don't view me as a Microsoft proponent; I just recognize value when I see it. BTW, I'm not saying that nobody should build and promote multiple solutions, just that DRTWWNN (Don't reinvent the wheel when not necessary), BOSIWP (Build on shared Interfaces whenever possible), and ASFI (Always Strive for Interoperability) be considered right up there with DRY (Don't repeat yourself) and other best practices in software engineering. > Additionally, I am in > the process of trying to join JSR-311 if my employer > would get off their arses and sign Exhibit B ;) Good deal! -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..." [1] http://www.mikeschinkel.com/blog/category/Microsoft P.S. As an aside, it is ironic for me that I would get pushback in a REST forum on the issue of constraining interfaces when one of Roy Fielding's main justifications for the benefits of REST is that there are four (4) well known and universally implemented verbs, and for all practical purposes, NO MORE. That is one of the reasons I so believe in REST, and it is also one of the reasons I think lots of different implementations are in general a Bad Thing(tm). FWIW.
Nic James Ferrier wrote: > "Mike Schinkel" <mikeschinkel@...> writes: > > > I have no problem with sharing ideas. But just as Roy > > Fielding does not believe it is a good thing for a GET > > to change state, I do not believe it is a good thing to > > have lots of libraries and frameworks offered with what > > amounts to arbitrary differences. I think it should > > be a best practice to have *consideration* for prior > > art and not to duplicate prior art if there are no > > obvious benefits. I think the unnecessary > > fragmentation of libraries and frameworks holds back > > progress. > > > The trouble is, we're in the early days of REST > frameworks. You expect to see some proliferation before > consolidation. > > The market (in its abstract sense rather than its > specific monetary form) will sort it out. You are probably completely right. It's just that I've been through so damn many of these cycles and seen so many errors already that I'm hoping we can follow Otto von Bismarck's lead, not George W. Bush's. :) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
Many of the cycles that you (and we) have been through have indeed made promises about interoperability, but failed to deliver. However, the REST frameworks that are starting up are /not/ about interoperability between remote software components. You can see that through the summary descriptions of RESTEasy and Restlets. The designers of these frameworks are taking for granted that integration-through-protocol has been mostly achieved via HTTP. They are attempting to simplify - not standardize - the building of HTTP-based applications using the design guidelines of REST. -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Mike Schinkel Sent: Monday, February 26, 2007 10:18 PM To: 'Nic James Ferrier' Cc: 'damnhandy2000'; rest-discuss@yahoogroups.com Subject: RE: [rest-discuss] RE: New REST framework for Java Nic James Ferrier wrote: > "Mike Schinkel" <mikeschinkel@...> writes: > > > I have no problem with sharing ideas. But just as Roy Fielding does > > not believe it is a good thing for a GET to change state, I do not > > believe it is a good thing to have lots of libraries and frameworks > > offered with what amounts to arbitrary differences. I think it > > should be a best practice to have *consideration* for prior art and > > not to duplicate prior art if there are no obvious benefits. I > > think the unnecessary fragmentation of libraries and frameworks > > holds back progress. > > > The trouble is, we're in the early days of REST frameworks. You expect > to see some proliferation before consolidation. > > The market (in its abstract sense rather than its specific monetary > form) will sort it out. You are probably completely right. It's just that I've been through so damn many of these cycles and seen so many errors already that I'm hoping we can follow Otto von Bismarck's lead, not George W. Bush's. 
:) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
Chuck Hinson wrote: > I'm having trouble understanding the interoperability issue. > This is HTTP we're talking about here. How many different > web servers do we have nowadays? App servers? Servlet > containers? Maybe I haven't been paying attention, but I > didn't realize there were any major interoperability issues > between them I'm talking about the implementation level on the server, not at the HTTP level. I've been studying Python lately, so I'll use it for an example. If I use Django, I can't use SQLObject or SQLAlchemy. Or Kid or Cheetah or Myghty for templates. Etc. So I have to use Django, or I get to have access to a mix-and-match approach. But I can't do both. And I have to spend an incredible amount of time evaluating the differences and trying to decide which solution is going to meet my needs and not box me in. > I say bring on the frameworks. Each will have its own > approach for facilitating RESTful development - some will use > annotations, some will let you use URI-templating, others > will make it easy to map database records to resources; I'll > pick the one I use based on the needs of my current project. And what I'm asking is that people don't needlessly proliferate without having some clear and obvious added value, and if possible find and use some common interfaces with existing frameworks. > If it turns out that the framework I'm using doesn't suit my > needs, at least I'll have some options. And if people heed my concerns, you'll have a far easier time switching. If they ignore my concerns, your momentum on one framework may well make it impossible for you to switch. Said another way: Many of us lived through the days of proprietary lock-in by vendors; why would you champion proprietary lock-in by open-source projects? 
-- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
On Feb 27, 2007, at 2:42 AM, Mike Schinkel wrote: > Elliotte Harold wrote: > > Mike Schinkel wrote: > > > > > How is this an improvement over RESTlet[1]? Does this mean > > we're going > > > to see lots of incompatible REST architecture implementations? > > > > I certainly hope so. Only by trying a lot of different things > > can we see what works. > > Variety for variety's sake is NOT good, IMO. That's why I asked > what was > the difference. > That's like asking you to keep your opinion to yourself, because there are enough here already. > > Furthermore, I see no particular reason to standardize on the > server. > > The interface between clients and servers is standard, but > > not the interfaces by which servers communicate on their own system. > > Have you ever looked at Python's WSGI? > > > If we bless one true architecture for serving REST, we will > > ultimately harm interoperability since we'll almost certainly > > bake assumptions into our code that are only true of that one > > local framework. > > I don't see how a significant number of arbitrarily different > implementations can ever help interoperability. I can only see how > it can > harm it. You might want to consider the difference between interoperability and portability. If a "RESTEasy" application can't interoperate with a JSR311-based one, the reason is that they aren't (or can't be) used according to REST principles. I'm certainly not defending this particular framework, but your questioning its value without even looking at the approach it takes (and without knowing much about the Java space, as you explain later) seems utterly pointless. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Mike Dierken wrote > They are attempting to simplify - not standardize > - the building of HTTP based applications using the design > guidelines of REST. And that's what I was hoping would not be the case. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
Hi Mike, On Feb 26, 2007, at 5:58 PM, Mike Schinkel wrote: > Bill Venners wrote: >>>> would be articles. But I also want people to be able to >> look at the >>>> list of articles at /articles based on topic. There was I think a >>>> suggestion on this list to put the list of articles that are just >>>> about Java at: >>>> >>>> /articles/java >>> >>> Or you could use "/topics/java", which makes more sense to me. >>> >> Trouble with that is it isn't specific to articles. We also >> have other categories, such as news. And we want a specific >> URL for letting people browse Java news and a different one >> for Java articles. > > /articles/ > /topics/ > /news/ > My requirements are that articles about Java and news about Java need two different URLs. /topics/java is one URL. At /topics/java I'd expect to see any kind of content about Java, but that is not a requirement. >> Also, I want to have something useful at >> each subpath. I could put a list of topics at: >> >> /topics >> >> but that's not really deserving of its own page. > > Why not? It should list the topics available to be covered. > Actually it would probably be tags, or just a search box. I lost confidence in listing topics a priori that content would fall into. Sure, you could have such a page, but if I weren't forced to do it by the information architecture I wouldn't give this a page. I don't have one now and don't plan to have one in the future. >> Also, I want to offer these lists sorted alphabetically and by >> reverse pub date. So going in this direction I'd need to do >> something >> like: >> >> /alpha/topics/java >> >> or >> >> /topics/alpha/java >> /topics/pubdate/java > > Sorting is not an entity, it is a layout/formatting directive. IOW, it > doesn't change the content, only its presentation. So in a path it > should > be at the end, not the beginning, or in a param: > > /topics/java/alpha > /topics/java?sort=alpha > > I'd probably prefer the latter. > Well that was the point of my response. 
Your original question was why does one need query parameters. >> And what would you put at >> >> /alpha > > Nothing. Which was also my point. >> when you have articles about Java listed at /articles?t=java, >> people can kind of get the idea from the URL that this is a list of >> articles rather than a list of news items. > > But they can get the same idea if you segment as: > > /articles/ > /news/ > I'm not sure if you mean /topics/articles here or /articles. If you say /topics/articles I think it is pretty clear. But I still feel /topics itself is artificial. /articles is all articles. /news is all news. If you want to look at news about Java and articles about Java, it is at: /articles?t=java /news?t=java The query parameters yield a "view" of part of the concept to the left of the question mark, in this case, the subset of content items that fall under the "Java" category. What I'm claiming is that yes, you potentially could try to force everything into path fields (between the slashes), but named parameters are often a better fit. You can leave a named query param completely out to say it is at its default value, whereas using a path field for everything would require you to actually put a default character in the URL, because you need something in each position (unless what you want to leave out is at the end). Also, I think it would be easier to add new parameters later than new path fields, because when you try and put something that fits better as a query into the path, it needs to go in the beginning. So if you didn't think of it initially, you can't put it in without breaking existing links, because path params are parsed by position, not by name. Bill ---- Bill Venners President Artima, Inc. 
http://www.artima.com > -- > -Mike Schinkel > http://www.mikeschinkel.com/blogs/ > http://www.welldesignedurls.org > http://atlanta-web.org - http://t.oolicio.us > "It never ceases to amaze how many people will proactively debate away > attempts to improve the web..."
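Bill's point above — that a named query parameter can simply be left out when it sits at its default, while a positional path scheme would need a placeholder segment in every position — can be sketched in Java. This is a minimal illustration of the idea, not code from Artima's site or any framework in this thread; the "t" key and the "pubdate" default sort are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class ArticleUrls {
    // Build /articles URLs. Named query params are omitted entirely at their
    // defaults; a positional path scheme would instead need a placeholder
    // segment in each position. ("t" as the topic key and "pubdate" as the
    // default sort are illustrative assumptions, not Artima's actual scheme.)
    static String articlesUrl(String topic, String sort) {
        Map<String, String> params = new LinkedHashMap<>();
        if (topic != null) {
            params.put("t", topic);
        }
        if (sort != null && !sort.equals("pubdate")) {
            params.put("sort", sort); // only emitted when non-default
        }
        if (params.isEmpty()) {
            return "/articles";
        }
        StringJoiner query = new StringJoiner("&");
        params.forEach((k, v) -> query.add(k + "=" + v));
        return "/articles?" + query;
    }

    public static void main(String[] args) {
        System.out.println(articlesUrl("java", "pubdate")); // /articles?t=java
        System.out.println(articlesUrl("java", "alpha"));   // /articles?t=java&sort=alpha
        System.out.println(articlesUrl(null, null));        // /articles
    }
}
```

Adding a new optional parameter later means one more `if` branch; existing links keep working, which is exactly the evolvability argument Bill makes for query params over path positions.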
Well, HTTP is already standardized. My opinion is that trying to standardize on /implementations/ will result in the never-ending cycles that you mentioned earlier. Maybe it's time for a change... -----Original Message----- From: Mike Schinkel [mailto:mikeschinkel@...] Sent: Monday, February 26, 2007 10:44 PM To: 'Mike Dierken' Cc: rest-discuss@yahoogroups.com Subject: RE: [rest-discuss] RE: New REST framework for Java Mike Dierken wrote > They are attempting to simplify - not standardize > - the building of HTTP based applications using the design guidelines > of REST. And that's what I was hoping would not be the case. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
Hi Ryan and all, Thanks for starting this new effort. I think it is positive to see new RESTful frameworks in the Java world, as long as they bring in new ideas or better designs. This is a sign of vitality and it should probably only help to broaden the adoption of REST ideas. I don't see any interoperability issue as long as we all agree on using HTTP 1.1, standard URIs, and don't force on users specific formats/media types for resource representations. Concerning portability from one framework to another, there is a risk of lock-in and also a need for experimentation. This is why we all need to support standard APIs like JSR-311 (Annotations for RESTful Web Services). The Restlet framework [1] has a separation between the Restlet API and the reference implementation (Noelios Restlet Engine). This means that we are encouraging alternative implementations of the API, which I would like to submit to the JCP in 2008 or later this year. The approach of JSR-311 should be complementary to the Restlet approach [2]. JSR-311 cites some potential supporting technologies like Servlets and JAX-WS. The Restlet API will also be a very natural foundation. There could be an opportunity for integration between RESTEasy and Restlets based on JSR-311. I'm also planning to experiment with annotations during the work of the JSR-311 expert group and to add support for them in a later release of Restlets. For now, we are focusing on releasing the final 1.0 version, with a new Web site and professional support. All the best, Jerome [1] http://www.restlet.org [2] http://blog.noelios.com/2007/02/14/
Stefan Tilkov wrote: > > > How is this an improvement over RESTlet[1]? Does this > > > mean we're going to see lots of incompatible REST > > > architecture implementations? > > > > > I certainly hope so. Only by trying a lot of different > > things can we see what works. > > > Variety for variety's sake is NOT good, IMO. That's why I > asked what was the difference. > > That's like asking you to keep your opinion to yourself, > because there are enough here already. I honestly don't see the analogy. > I'm certainly not defending this particular framework, > but your questioning its value Would someone actually READ what I have been writing instead of going into emotive attack mode? I did NOT question its value. I asked IF it added value over RESTlet. I did NOT say that it didn't, only that IF it didn't, that would NOT be a good thing. And if it DID, then utilizing shared interfaces WOULD be a good thing. If you are going to attack my comments, *please* at least attack what I said, not what you felt about what I said. > without even looking at the approach it takes (and > without knowing much about the Java space, as you explain > later) seems utterly pointless. Are you saying lessons learned using one set of tools are completely and totally irrelevant to other tools? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
On 2/27/07, Bill Venners <bv-svp@...> wrote: > /articles?t=java > /news?t=java /articles/java /news/java which each resolve to an item ; /article/1234 /news-item/2335 Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
Bill Venners wrote: > > > > Or you could use "/topics/java", which makes more > > > > sense to me. > > > > > > > Trouble with that is it isn't specific to articles. > > > We also have other categories, such as news. And we > > > want a specific URL for letting people browse Java > > > news and a different one for Java articles. > > > > > /articles/ > > /topics/ > > /news/ > > > My requirements are that articles about Java and news > about Java need two different URLs. /topics/java is one > URL. At /topics/java I'd expect to see any kind of > content about Java, but that is not a requirement. I was assuming /topics/ were categorizations of articles and news, but articles and news themselves. > > > Also, I want to have something useful at each > > > subpath. I could put a list of topics at: > > > > > > /topics > > > > > > but that's not really deserving of its own page. > > > > > Why not? It should list the topics available to be > > covered. > > > Actually it would probably be tags, or just a search box. > I lost confidence in listing topics a priori that content > would fall into. I concur with that, but you can still have a URL with tags like Technorati or Flickr, no? > Sure, you could have such a page, but if I weren't forced > to do it by the information architecture I wouldn't give > this a page. I don't have one now and don't plan to have > one in the future. What harm would there be to offer such a page? Wouldn't it actually be a benefit to users? And it would definitely be a benefit in increasing Google PageRank if used wisely. > > > Also, I want to offer these lists sorted > > > alphabetically and by reverse pub date. So going in > > > this direction I'd need to do something like: > > > > > > /alpha/topics/java > > > > > > or > > > > > > /topics/alpha/java /topics/pubdate/java > > > > > Sorting is not an entity, it is a layout/formatting > > directive. IOW, it doesn't change the content, only > > its presentation. 
So in a path it should be at the > > end, not the beginning, or in a param: > > > > /topics/java/alpha /topics/java?sort=alpha > > > > I'd probably prefer the latter. > > > Well that was the point of my response. Your original > question was why does one need query parameters? I wasn't asking a question with a goal of asserting a position, I was asking an honest question just like I asked honestly in my URLQuiz about people's positions on the .WWW subdomain [1]. > > > when you have articles about Java listed at > > > /articles?t=java, people can kind of get the idea > > > from the URL that this is a list of articles rather > > > than a list of news items. > > > > > But they can get the same idea if you segment as: > > > > /articles/ > > /news/ > > > I'm not sure if you mean /topics/articles here or > /articles. /articles/ > If you say /topics/articles I think it is > pretty clear. But I still feel /topics itself is > artificial. I only picked it because it was mentioned in the thread. I have no affinity to it. /articles for articles and /news for news works for me. > If you want to look at news about Java and articles about > Java, it is at: > > /articles?t=java /news?t=java > The query parameters yield a "view" of part of the > concept to the left of the question mark, in this case, > the subset of content items that fall under the "Java" > category. OTOH, I would far prefer to see: /articles/java/ /news/java/ As well as: /java/articles/ /java/news/ (But then we have been down that path on this forum with far more conversation participants than just you and me, haven't we? :) > What I'm claiming is that yes, you potentially could try > to force everything into path fields (between the > slashes), but named parameters are often a better fit. Agreed in general, but given the above example not agreed in specifics. 
> You can leave a named query param completely out to say > it is at its default value, whereas using a path field > for everything would require you to actually put a > default character in the URL, because you need something > in each position (unless what you want to leave out is at > the end). Can you give me an example where this is a problem? I'd like to see how I would address it (if you have given such an example to date, sorry, I missed it.) > Also, I think it would be easier to add new > parameters later than new path fields, because when you > try and put something that fits better as a query into > the path, it needs to go in the beginning. So if you > didn't think of it initially, you can't put it in without > breaking existing links, because path params are parsed > by position, not by name. True, which is why I like to look at the nature of the IA when deciding on a URL structure. Some information is much more resilient than others. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..." [1] http://blog.welldesignedurls.org/2007/02/19/urlquiz-1-www-or-non-www/
Jérôme Louvel wrote: > Thanks for starting this new effort. I think it is positive > to see new RESTful frameworks in the Java world, as long as > they bring in new ideas or better designs. Your points address the concerns I voiced. Thanks! -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
damnhandy2000 wrote: > I have been messing around with a framework idea for Java for the > past 4 months and finally > have been able to make it publicly available. It's called RESTEasy, > and you can read more > about it here: > > http://resteasy.damnhandy.com/ > > At present, it's very much a work in progress and has many rough > edges and typos. It utilizes > JAXB2 quite heavily and integrates very nicely with JPA and > Hibernate 3. It should be noted > that JAXB will not be a requirement. Curious to hear anyone's thoughts. Unclear as to how representations are generated. Where does the content type get selected? How do you do content/language negotiation? You confusingly talk about contactId being a URLParam when it is a part of the URL path and not a parameter. How do you do orthogonal request processing (authentication, authorisation, transformation)? How do you map different parts of the namespace to the same component? What, for instance, do GETs to the following resources result in: http://localhost/resteasy/contacts/12345/something http://localhost/resteasy/contacts/12345?myparam=whatever Current REST thinking is that you update a contact by using the PUT method; why do you use POST in your example? You don't give an example for creating a new contact or deleting an existing contact. You don't give an example for searching for contacts. Generally, in terms of a REST hello-world example, it seems incomplete if not incorrect. There's a lot of sludgy annotation markup the purpose of which is unclear and possibly misleading.
Hi Chris,
Thanks for your questions, I've addressed some of your questions below:
On Feb 27, 2007, at 4:42 AM, Chris Burdess wrote:
>
>
> Unclear as to how representations are generated. Where does the
> content type get selected? How do you do content/language negotiation?
RESTEasy uses a series of RepresentationHandlers which take care of that.
The RepresentationHandler is selected by the mediaType attribute on the
Response or Representation annotations. It defaults to application/xml
and the JAXBRepresentationHandler is used by default.
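The handler-selection step Ryan describes can be sketched as a simple media-type lookup. This is a hypothetical illustration of the idea only, not RESTEasy's actual classes; the map-based registry and the handler names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class HandlerRegistry {
    // Pick a representation handler by media type, falling back to the
    // application/xml (JAXB-style) handler when no mediaType is given.
    // Handler names here are illustrative, not RESTEasy's real API.
    private static final String DEFAULT_MEDIA_TYPE = "application/xml";
    private final Map<String, String> handlers = new HashMap<>();

    HandlerRegistry() {
        handlers.put("application/xml", "JaxbRepresentationHandler");
        handlers.put("application/json", "JsonRepresentationHandler");
    }

    // Returns the handler for the requested media type, or the default
    // handler when the media type is absent or unknown.
    String select(String mediaType) {
        String key = (mediaType == null) ? DEFAULT_MEDIA_TYPE : mediaType;
        return handlers.getOrDefault(key, handlers.get(DEFAULT_MEDIA_TYPE));
    }

    public static void main(String[] args) {
        HandlerRegistry registry = new HandlerRegistry();
        System.out.println(registry.select(null));               // JaxbRepresentationHandler
        System.out.println(registry.select("application/json")); // JsonRepresentationHandler
    }
}
```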
>
> You confusingly talk about contactId being a URLParam when it is a
> part of the URL path and not a parameter.
Yep, typo there.
>
> How do you do orthogonal request processing (authentication,
> authorisation, transformation)?
The framework was designed to work within a servlet container so that
it can utilize the authorization and authentication mechanisms of the
application server. My intent has never been to be able to run
outside of a Java EE 5 app server. Not sure what you mean by
transformation.
>
> How do you map different parts of the namespace to the same component?
>
> What, for instance, do GETs to the following resources result in:
>
> http://localhost/resteasy/contacts/12345/something
You could get an XML document that represents "something" that is a
member of the contact 12345. It could map to the following Java Method:
@HttpMethod(GET)
public Something getSomethingFromContact(@URIParam("contactId") Long id);
> http://localhost/resteasy/contacts/12345?myparam=whatever
You could use "myparam" as a modifier:
@HttpMethod(GET)
@Response(mediaType="application/xml")
public Contact getContact(@URIParam("contactId") Long id, @QueryParam
("myparam") String myparam);
This value could instruct the service to return a Contact instance
that includes additional details, etc.
>
> Current REST thinking is that you update a contact by using the PUT
> method, why do you use POST in your example?
There are varying opinions on that, and some of the examples put
forth by Sun in in their JAX-WS REST examples would have you believe
that updates should be done via POST:
http://java.sun.com/developer/technicalArticles/WebServices/restful/
I've read plenty of other articles touting the same sort of thing.
Another reason I'm using POST is that not all HTTP clients can
support PUT or DELETE operations. My driver for developing RESTEasy
was to create a framework that could be both truly RESTful (I am
aware that it's not quite there yet) but can also be compatible with
lesser HTTP clients. RESTEasy grew out of trying to create a service
that could plug into Adobe Flex or OpenLaszlo. The Flash player
doesn't support PUT or DELETE, nor does Apple's Safari. RESTEasy can
use POST + discriminator to accommodate these clients.
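One common shape for the POST + discriminator approach is a method-override check: a limited client (Flash, old Safari) tunnels PUT or DELETE through POST and names the intended verb in an override value such as an X-HTTP-Method-Override header or a `_method` form field. A minimal sketch, with a helper name that is mine, not RESTEasy's:

```java
// Computes the effective HTTP method for a request. Only POST may be
// overridden, and only to the verbs limited clients cannot send
// natively; anything else passes through unchanged.
public class MethodOverride {
    public static String effectiveMethod(String actualMethod, String override) {
        if (!"POST".equals(actualMethod) || override == null) {
            return actualMethod;
        }
        String o = override.toUpperCase();
        if (o.equals("PUT") || o.equals("DELETE")) {
            return o;
        }
        return actualMethod; // ignore unknown or unsafe overrides (e.g. GET)
    }
}
```

Restricting the override to POST requests matters: allowing a GET to be rewritten into a DELETE would let a crawler destroy data.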
> You don't give an example for creating a new contact or deleting an
> existing contact. You don't give an example for searching for
> contacts. Generally, in terms of a REST hello-world example, it
> seems incomplete if not incorrect.
You are correct about that, and better examples are forthcoming.
>
> There's a lot of sludgy annotation markup the purpose of which is
> unclear and possibly misleading.
You are correct that it does need better explanation.
Ryan-
Hi Alexander, On Feb 27, 2007, at 12:17 AM, Alexander Johannesen wrote: > On 2/27/07, Bill Venners <bv-svp@...> wrote: >> /articles?t=java >> /news?t=java > > /articles/java > /news/java > > which each resolve to an item ; > > /article/1234 > /news-item/2335 > I considered this option along the way as well, but the trouble with it is it breaks my desire to have something useful at each path segment. (Also, I want to have strings for article IDs in URLs, not numbers as in: /article/why_put_and_delete Anyway, if the user is at the above URL and chops off the / why_put_and_delete, they would get: /article And there's nothing sensible to put at that URL. If the URL is: /articles/why_put_and_delete Then when they chop, they get: /articles Which is a listing or way to browse through all articles. I considered just redirecting from /article to /articles, but I felt that was a bit jumpy for the user. It's not terrible, but neither is putting the topic in a query param instead of a path segment, and I chose the latter. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
Hi Mike, On Feb 27, 2007, at 12:39 AM, Mike Schinkel wrote: > Bill Venners wrote: >>>>> Or you could use "/topics/java", which makes more >>>>> sense to me. >>>>> >>>> Trouble with that is it isn't specific to articles. >>>> We also have other categories, such as news. And we >>>> want a specific URL for letting people browse Java >>>> news and a different one for Java articles. >>>> >>> /articles/ >>> /topics/ >>> /news/ >>> >> My requirements are that articles about Java and news >> about Java need two different URLs. /topics/java is one >> URL. At /topics/java I'd expect to see any kind of >> content about Java, but that is not a requirement. > > I was assuming /topics/ were categorizations of articles and news, but > articles and news themselves. > I think you left a "not" out, but I'm not sure where. However, what I'm after with topics is simply to show a listing of a subset of descriptions and links to articles (or news) about a certain topic. >>>> Also, I want to have something useful at each >>>> subpath. I could put a list of topics at: >>>> >>>> /topics >>>> >>>> but that's not really deserving of its own page. >>>> >>> Why not? It should list the topics available to be >>> covered. >>> >> Actually it would probably be tags, or just a search box. >> I lost confidence in listing topics a priori that content >> would fall into. > > I concur with that, but you can still have a URL with tags like > Technorati > or Flickr, no? > Yes, if you change topics to tags that makes more sense to me to have something under. /tags That would let you browse any content based on tag. But I might call this /explore or /browse. But tags aren't topics. >> Sure, you could have such a page, but if I weren't forced >> to do it by the information architecture I wouldn't give >> this a page. I don't have one now and don't plan to have >> one in the future. > > What harm would there be to offer such a page? Wouldn't it > actually be a > benefit to users? 
And it would definitely be a benefit in > increasing Google > PageRank if used wisely. > I'm not sure about page rank, but showing tags would make sense to me. Again these would list content of any kind at Artima all mooshed together: articles, news, syndicated RSS feeds, blogs, chapter PDFs, videos, audio, whatever. This does not solve the problem, however, of looking at one kind of content via a topic. Where I've been planning to go with topics is to replace what I have now with a search box in the left hand column, and the result would be a listing of articles (or news or whatever kind of content you're looking at a listing of) that fit the search query. Thus a /topics page would be a lone search box, and it just doesn't make sense to users that it would just search a subset of content. I think it should be called /search or something and search the whole site. And by the way, since they are entering a topic in a search box, it would come up as a GET with a query parameter, hence: /articles?t=java Were I to want to put it here: /articles/java I'd need to have Javascript form the query when you click the submit button, and do a redirect on the server to catch cases with no Javascript. I also considered that (see my earlier email about a9 versus Google) and decided against it. But the other problem with /articles/java, which is independent of whether or not I use a search box, is that the namespace of topics collides with the namespace of articles. >>>> Also, I want to offer these lists sorted >>>> alphabetically and by reverse pub date. So going in >>>> this direction I'd need to do something like: >>>> >>>> /alpha/topics/java >>>> >>>> or >>>> >>>> /topics/alpha/java /topics/pubdate/java >>>> >>> Sorting is not an entity, it is a layout/formatting >>> directive. IOW, it doesn't change the content, only >>> its presentation.
So in a path it should be at the >>> end, not the beginning, or in a param: >>> >>> /topics/java/alpha /topics/java?sort=alpha >>> >>> I'd probably prefer the latter. >>> >> Well that was the point of my response. Your original >> question was why does one need query parameters? > > I wasn't asking a question with a goal of asserting a position, I was > asking an honest question just like I asked honestly in my URLQuiz > about > people's positions on the .WWW subdomain [1]. > I know. I thought you had perhaps forgotten the question since it took me so long to reply. >>>> when you have articles about Java listed at >>>> /articles?t=java, people can kind of get the idea >>>> from the URL that this is a list of articles rather >>>> than a list of news items. >>>> >>> But they can get the same idea if you segment as: >>> >>> /articles/ >>> /news/ >>> >> I'm not sure if you mean /topics/articles here or >> /articles. > > /articles/ > Oh you mean with a trailing slash? Yes, I drop the trailing slash because it is one less character. The only place where I plan to keep it is the home page: http://www.something.com/ >> If you say /topics/articles I think it is >> pretty clear. But I still feel /topics itself is >> artificial. > > I only picked it because it was mentioned in the thread. I have no > affinity > to it. > /articles for articles and /news for news works for me. > I'm not sure what you're getting at here. >> If you want to look at news about Java and articles about >> Java, it is at: >> >> /articles?t=java /news?t=java >> The query parameters yield a "view" of part of the >> concept to the left of the question mark, in this case, >> the subset of content items that fall under the "Java" >> category. > > OTOH, I would far prefer to see: > > /articles/java/ > /news/java/ > > As well as: > > /java/articles/ > /java/news/ > > (But then we have been down that path on this forum with far more > conversation participants than just you and me, haven't we? :) > Yes.
I have a big problem with /java. Because you've now said that topics can be at the root. That doesn't leave much room for anything else, because the namespace of topics could collide with anything. At the root I want to have full potential in the future to add new kinds of things. So I wouldn't put anything with an unlimited namespace at the root. Amazon did this at a9, and that means they will never be able to put anything else at a9 besides search results, unless they want to break URLs. Maybe breaking URLs is not so bad at a search engine, but I would have put something like "search" in the path: http://a9.com/search/dog+cat rather than http://a9.com/dog+cat >> What I'm claiming is that yes, you potentially could try >> to force everything into path fields (between the >> slashes), but named parameters are often a better fit. > > Agreed in general, but given the above example not agreed in > specifics. > I think the problem here is that it is difficult to explain my requirements. Given my requirements, it doesn't make sense to put topics in a path segment. >> You can leave a named query param completely out to say >> it is at its default value, whereas using a path field >> for everything would require you to actually put a >> default character in the URL, because you need something >> in each position (unless what you want to leave out is at >> the end). > > Can you give me an example where this is a problem? I'd like to > see how I > would address it (if you have given such an example to date, sorry, > I missed > it.) > /alpha/topics/articles/java /articles?o=a&t=java These would show an alphabetical listing of articles about Java. The default would be by reverse publication date: /chrono/topics/articles/java /articles?t=java In the latter case, I just leave the o= query out. But in the former, I need to put something in the path segment because path segments are parsed by position. So I put "chrono". Bill
Assume one has multiple URIs to the same resource, say something like: weddings.com/2005/04/11/smith weddings.com/san-digeo/2005/04/smith If those are meant to designate the same single resource (and the first one is the "primary" URI for the resource), is a 301 "Moved Permanently" redirect the most appropriate solution? "Moved Permanently" seems to indicate that the second URI is somehow invalid or outdated, which isn't true in this case. Any insight into this issue would be appreciated. Thanks.
: And what I'm asking is that people don't needlessly proliferate without : haven't some clear and obvious added value, and if possible find and use : some common interfeces with existing frameworks. Now we're talking. Everyone please check with Mike from now on before needlessly proliferating on the list. Mike, what are "interfeces", if you don't mind? Are they on-topic? Walden :-)
Hi Walden, On Feb 27, 2007, at 7:06 PM, Walden Mathews wrote: > Mike, what are "interfeces", if you don't mind? Are they on-topic? > That's pretty funny. I have unfortunately seen a lot of interfeces over the years. I think there is something to what Mike is trying to say. I agree with what seems to be the majority here that the "froth" of multiple competing implementations is good in that it encourages innovation and enables multiple tools focused on different needs, but it can also confuse the marketplace. For example, Python has a bunch of web frameworks that compete. Ruby, by contrast, has Rails, which dominates so much that you don't hear about any other Ruby web framework. I think that actually helps promote Ruby over Python to some extent. So even though froth is good in general, there is a tradeoff. Bill ---- Bill Venners President Artima, Inc. http://www.artima.com
> Mike, what are "interfeces", if you don't mind? Are they on-topic? If you translate "Web Services" to Greek back again, it comes out "Inter Feces".
Brad Fults schrieb: > > > Assume one has multiple URIs to the same resource, say something like: > > weddings.com/ 2005/04/11/ smith > > weddings.com/ san-digeo/ 2005/04/smith > > If those are meant to designate the same single resource (and the > first one is the "primary" URI for the resource), is a 301 "Moved > Permanently" redirect the most appropriate solution? > > "Moved Permanently" seems to indicate that the second URI is somehow > invalid or outdated, which isn't true in this case. > > Any insight into this issue would be appreciated. If both identify the "same" resource and can be used interchangeably, I wouldn't expect a redirect at all. Just treat them the same way. Best regards, Julian
Brad Fults wrote: > Assume one has multiple URIs to the same resource, say something like: > > weddings.com/2005/04/11/smith > > weddings.com/san-digeo/2005/04/smith > > If those are meant to designate the same single resource (and the > first one is the "primary" URI for the resource), is a 301 "Moved > Permanently" redirect the most appropriate solution? > > "Moved Permanently" seems to indicate that the second URI is somehow > invalid or outdated, which isn't true in this case. if weddings.com/san-digeo/2005/04/smith returns a 301 to weddings.com/2005/04/11/smith this means: 1. A user-agent should perform the operation it was going to perform on weddings.com/san-digeo/2005/04/smith on weddings.com/2005/04/11/smith. 2. This will hold every single time an operation was going to be performed on weddings.com/san-digeo/2005/04/smith and therefore 2a. This 301 response can be cached indefinitely, since it's not going to change. 2b. Any record of the URI weddings.com/san-digeo/2005/04/smith can be replaced with a record of the URI weddings.com/2005/04/11/smith. Which matches your description completely. Since there's no built-in way to determine or reason about the previous states of resources, the difference between "Moved Permanently" because something did indeed move and "Moved Permanently" because the URI was designed from the get-go to 301 is outside of what HTTP can do. 301 can't indicate an invalid URI, because an invalid URI can't be parsed by a server (a 400 error can indicate an invalid URI, though it should have been caught by the client before then). Nor does it indicate an outdated URI; though in some cases the URI is no longer doing what it used to be doing, it's still doing something.
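This "designed from the get-go to 301" pattern is straightforward server-side: pick one URI form as canonical and answer the alternates with a permanent redirect to it. A minimal sketch, with an illustrative lookup table (the paths and class name are hypothetical, not from any framework discussed here):

```java
import java.util.Map;

public class CanonicalRedirect {
    // Alternate path -> canonical path. In practice this mapping would
    // be computed from the resource model rather than hard-coded.
    static final Map<String, String> CANONICAL = Map.of(
        "/san-diego/2005/04/smith", "/2005/04/11/smith");

    // Returns the Location value for a 301 response if the request
    // used an alternate form, or null if the path is already canonical
    // (in which case the resource is served directly).
    public static String redirectLocation(String path) {
        return CANONICAL.get(path);
    }
}
```

Because a 301 may be cached indefinitely, the alternate form costs each client at most one extra round trip, after which everything (caches, bookmarks, crawlers) converges on the canonical URI.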
Julian Reschke wrote: > If both identify the "same" resource and can be used interchangeably, I > wouldn't expect a redirect at all. Just treat them the same way. That's wasteful though. Having only one URI respond to GET requests allows for better caching, and lets clients reason that URI A can always be used where URI B is used (the fact that Google does such reasoning can be a strong reason in itself as far as whoever pays the bills is concerned).
Ryan J. McDonough wrote:
>> What, for instance, do GETs to the following resources result in:
>>
>> http://localhost/resteasy/contacts/12345/something
>
> You could get an XML document that represents "something" that is a
> member of the contact 12345. It could map to the following Java Method:
>
> @HttpMethod(GET)
> public Something getSomethingFromContact(@URIParam("contactId") Long id);
>
>> http://localhost/resteasy/contacts/12345?myparam=whatever
>
> You could use "myparam" as a modifier:
>
> @HttpMethod(GET)
> @Response(mediaType="application/xml")
> public Contact getContact(@URIParam("contactId") Long id, @QueryParam
> ("myparam") String myparam);
>
In the above examples I think we are on the same page. See Marc's blog
[1] for very similar ideas.
Paul.
[1]
http://weblogs.java.net/blog/mhadley/archive/2007/02/jsr_311_java_ap.html
--
| ? + ? = To question
----------------\
Paul Sandoz
x38109
+33-4-76188109
Hi Paul,
I actually did notice that a few days when the formation of the JSR
was announced. When I saw Marc's blog, I decided to share what I had
been working on as it is very similar. I blogged about it a bit here:
http://www.damnhandy.com/2007/02/21/resteasy-preview-a-restful-web-services-framework-for-java/
I had been trying to follow the WADL spec during much of my
development, which probably explains the similarities. Eventually, I
hope to be able to generate service-stubs from a WADL definition or
dynamically generate a WADL definition from the annotations.
Ryan-
On 2/28/07, Paul Sandoz <Paul.Sandoz@...> wrote:
>
>
>
>
>
>
> Ryan J. McDonough wrote:
> >> What, for instance, do GETs to the following resources result in:
> >>
> >> http://localhost/resteasy/contacts/12345/something
> >
> > You could get an XML document that represents "something" that is a
> > member of the contact 12345. It could map to the following Java Method:
> >
> > @HttpMethod(GET)
> > public Something getSomethingFromContact(@URIParam("contactId") Long id);
> >
> >> http://localhost/resteasy/contacts/12345?myparam=whatever
> >
> > You could use "myparam" as a modifier:
> >
> > @HttpMethod(GET)
> > @Response(mediaType="application/xml")
> > public Contact getContact(@URIParam("contactId") Long id, @QueryParam
> > ("myparam") String myparam);
> >
>
> In the above examples I think we are on the same page. See Marc's blog
> [1] for very similar ideas.
>
> Paul.
>
> [1]
> http://weblogs.java.net/blog/mhadley/archive/2007/02/jsr_311_java_ap.html
>
> --
> | ? + ? = To question
> ----------------\
> Paul Sandoz
> x38109
> +33-4-76188109
>
--
Ryan J. McDonough
http://www.damnhandy.com
Hello Bill, Tradeoffs are good, and yours makes sense to me, except that rest-discuss is not a marketplace so much as it is a brainstorming space for folks of all kinds, at all levels, trying to "get it" more than they currently do. Offering up a new framework, to me, is just one way of doing that, and so is fair play and need answer to nobody's sense of marketplace, evolution or whatever. I say "bring it". Walden ----- Original Message ----- From: "Bill Venners" <bv-svp@...> To: "Walden Mathews" <waldenm@...> Cc: "Mike Schinkel" <mikeschinkel@...>; "'Chuck Hinson'" <chuck.hinson@...>; <rest-discuss@yahoogroups.com> Sent: Tuesday, February 27, 2007 10:42 PM Subject: Re: [rest-discuss] New REST framework for Java : Hi Walden, : : On Feb 27, 2007, at 7:06 PM, Walden Mathews wrote: : : > Mike, what are "interfeces", if you don't mind? Are they on-topic? : > : That's pretty funny. I have unfortunately seen a lot of interfeces : over the years. : : I think there is something to what Mike is trying to say. I agree : with what seems to be the majority here that the "froth" of multiple : competing implementations is good in that it encourages innovation : and enables multiple tools focused on different needs, but it can : also confuse the marketplace. For example, Python has a bunch of web : frameworks that compete. Ruby, by contrast, has Rails, which : dominates so much that you don't hear about any other Ruby web : framework. I think that actually helps promote Ruby over Python to : some extent. So even though froth is good in general, there is a : tradeoff. : : Bill : ---- : Bill Venners : President : Artima, Inc. : http://www.artima.com
I just posted a "URLQuiz" on my blog about "URL Equivalence and Cachability." If anyone wants to test their knowledge of some obscure cases related to URLs and encoding, the quiz is located at [1]. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..." [1] http://blog.welldesignedurls.org/2007/03/01/urlquiz-2-url-equivalence-and-cachability/
On Sun, 2007-02-25 at 21:53 +0100, Danny Ayers wrote: > Wow, what a thread. I'll respond at greater length once I've re-read a > couple of times and thought a bit... but there is one point I can pick > up on right away, from Benjamin: > [[ > I challenge the effectiveness of RDF on a number of points > * The effectiveness of the graph structure for conveying data machine > to > machine > ]] > The Web is a graph structure. That's fine in the abstract sense, but * An atom document has an atom structure * A html document has a html structure * A train list document has a train list structure These are the structures I really want to get at when I process information from another component in the network. If these are encoded directly in XML I can extract this information using tree-walking algorithms. If they are encoded into RDF I need a tool that does tree-walking to build up an RDF graph, then I need to do graph-walking to build the structure I really want to extract. Not only is the graph structure level unnecessary, it is more algorithmically complex than the tree walk. I suggest that it is at a fundamental level easier to write a feed reader that understands atom/xml than it is to write a feed reader that understands atom/rdf, no matter how good your tools are for processing the underlying format or model. In either case, RDF or XML, you still need to specialise your document type. In RDF you need vocabulary. In XML you need schema, which encompasses vocabulary and structure. I concede that uniform structure is important when you want to throw data into a database and allow query over it. However, I would contend that this is not a common function in machine-to-machine interoperation. Most machine processing needs to do something specific with the data it receives, and for that we do need the higher-level vocabulary or schema to be well-defined.
If it is a prerequisite of the machine-processable web to have fully self-describing documents, then we can always translate these to RDF for our storage needs if we really want to. In the mean-time, I would suggest that RDF complicates the common case in favour of an uncommon case that can be solved in a different way once the common case is dealt with. I believe Mark Baker has a different perspective on this, one which I would like to understand better. On Sat, 2007-02-24 at 18:40 +0000, Bill de hOra wrote: > Benjamin Carlyle wrote: > > As I will point out later in the document, I > > don't think RDF is as conducive to good vocabulary evolution as > XML. > XML isn't conducive to vocabulary evolution either. This is very > strange > juxtaposition. Most XML vocabulairies I've seen that declare an > extensibility based end up defining a subset of what RDF defines. I think the evidence says otherwise. We have html and other formats to demonstrate that the basic approach behind good XML development works. The important rules seem to be: * Use must-ignore semantics for anything that is not understood * Don't define new namespaces for extensions, so the extensions can one day be merged back into the base document type * Attack a specific problem space, align communities behind the common brand-name, and hammer things out until it all interoperates I'm not sure whether or not we have evidence of RDF vocabularies that have survived similar kinds of pressures, though FOAF may be an example. > > RSS was defined in terms of RDF so that it > > could be easily aggregated. However, aggregation did not happen at > the > > RDF level in practice. Instead, RSS was aggregated at a higher > level. > But you don't say why that was. Why was that? I would guess: Because it wasn't useful. Because the graph structure is too low-level to meet application-specific data integration requirements automatically. Do you have any alternative thoughts on that? 
> > Must-ignore semantics mean that a document with additional elements > will > > be ignored by old implementations. > mI in my mind is about having a trailing "else" in the code > that > logs to disk instead of throwing an exception. It's a sensible > programmatic default. The evidence seems to suggest that mI is critical to long-term evolution of documents. It is about handling messages from the future and from the past: Only require information if you need it to function. Ignore what you don't understand. > > This allows new versions of the > > document type to be deployed without breaking the architecture. It > also > > allows extensions to be added for various purposes. If we continue > to > > use mime we can be specific about particular kinds of subclasses. > For > > example, I might sub-class atom for the special purpose of > indicating > > the next three trains that will arrive at a railway station: > > application/pids+atom+xml. > > > > RDF isn't really as flexible. > I can't agree. RDF's handling of unknown triples is far more flexible > than mI. Could you provide some examples of this? > [aside: it's weird to watch people argue up the uniform interface as > a > key constraint of REST, but happily rail on uniform data. ] This was part of Mark's recent statements. I would like to attack the issue from a specific direction, and that is application-to-application interoperability. One of my impressions from WSEC was that there wasn't a great maturity of understanding about the uniform interface being displayed around the room. Everyone was looking for the practical benefits of specific methods, which is fine, but weren't quite seeing the benefits of uniform interfaces in general. One voice in the room asked why he should care about uniform methods, when the component that receives a message still has to understand the whole thing.
He didn't see the point of using a uniform method vs an ad hoc method when the whole message still had to be understood in a very specific way... and the thing is that in a static architecture he is exactly right. The uniform interface doesn't offer a fundamental benefit in a static architecture. It is only as we evolve our architectures and allow different webs to interact with each other that the key rule takes effect, and that is: * The kinds of interactions in an architecture and the kinds of data transferred in the interactions should be decoupled from each other. That is to say, the set of methods and the set of content types should be decoupled from one another. The reason for this is that they vary at different rates. I am very rarely going to need to add new methods or return codes to form new interactions in the architecture, but very often going to need to add new kinds of information. I am very often going to need to add new content types. The goal of application-to-application integration is to constrain the kinds of message that are sent around an architecture so that the messages can be understood wherever they arrive. Whenever the data schemas of two components line up, I should be able to configure them to have specific kinds of interactions with each other. I might want them to have the GET interaction, or the PUT, or the SUBSCRIBE. The thing is that uniform methods are just an underpinning for uniform interactions, and that uniform data is still required. I see the claim that RDF provides uniform data, but it really doesn't. It doesn't any more than XML provides uniform data. It just provides a uniform way of creating different data types. Uniform data only comes about with RDF when you add vocabulary to it. Uniform data only comes about with XML when you add both vocabulary and structure to it. Thus, I suggest that RDF and REST are not an automatic fit to each other.
It is necessary to prove that RDF facilitates better ways of constructing uniform kinds of data than XML does. RDF's uniform structure is not in and of itself a clear win for REST. Benjamin.
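Benjamin's tree-walking point can be made concrete: a few lines of DOM walking extract the application-level structure (entry titles) from an Atom-like document, and unknown elements simply fall out of the walk, which is must-ignore behavior for free. This is an illustrative sketch over a simplified feed, not a conforming Atom processor:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class FeedTitles {
    // Tree-walks an Atom-like document and collects entry titles.
    // Elements the walker does not look for are silently ignored,
    // so documents carrying extensions still process cleanly.
    public static List<String> titles(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            List<String> out = new ArrayList<>();
            NodeList entries = doc.getElementsByTagName("entry");
            for (int i = 0; i < entries.getLength(); i++) {
                Element entry = (Element) entries.item(i);
                NodeList ts = entry.getElementsByTagName("title");
                if (ts.getLength() > 0) {
                    out.add(ts.item(0).getTextContent());
                }
            }
            return out;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The RDF equivalent would first parse triples into a graph and then query that graph for the title relation; the point under debate is whether that extra level buys anything for this kind of application-specific extraction.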
On 01/03/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > On Sun, 2007-02-25 at 21:53 +0100, Danny Ayers wrote: > > Wow, what a thread. I'll respond at greater length once I've re-read a > > couple of times and thought a bit... but there is one point I can pick > > up on right away, from Benjamin: > > [[ > > I challenge the effectiveness of RDF on a number of points > > * The effectiveness of the graph structure for conveying data machine > > to > > machine > > ]] > > The Web is a graph structure. > > That's fine in the abstract sense, More than that, my user agent (even if it's just a browser) can wander around that graph. but > * An atom document has an atom structure > * A html document has a html structure Both of these will describe part of a graph-shaped model, because of the links they contain. > * A train list document has a train list structure Ok, a list can be expressed directly in a tree or a graph. > These are the structures I really want to get at when I process > information from another component in the network. Are all your local data models trees? If these are encoded > directly in XML I can extract this information use tree-walking > algorithms. If they are encoded into RDF I need a tool that does > tree-walking to build up an RDF graph, then I need to do graph-walking > to build the structure I really want to extract. Same question as above. Note also that there are several non-XML RDF syntaxes, and that many non-RDF syntaxes can be interpreted directly as RDF (e.g. Raptor has an Atom parser). [snip] > If it is a prerequisite of the machine-processable web to have fully > self-describing documents, then we can always translate these to RDF for > our storage needs if we really want to. In the mean-time, I would > suggest that RDF complicates the common case in favour of an uncommon > case that can be solved in a different way once the common case is dealt > with. 
I would see that the other way around, that RDF doesn't complicate the common case because there's no conflict with passing around XML. But when you need to integrate data across domains, RDF is mighty handy. > I believe Mark Baker has a different perspective on this, one which I > would like to understand better. Me too :-) > I see the claim that RDF provides uniform data, but it really doesn't. > It doesn't any more than XML provides uniform data. It just provides a > uniform way of creating different data types. Uniform data only comes > about with RDF when you add vocabulary to it. Uniform data only comes > about with XML when you add both vocabulary and structure to it. RDF uses a uniform naming scheme for entities and relationships between entities, the same naming scheme as the web, URIs. > Thus, I suggest that RDF and REST are not an automatic fit to each > other. It is necessary to prove that RDF facilitates better ways of > constructing uniform kinds of data than XML does. RDF's uniform > structure is not in and of itself a clear win for REST. RDF is a data model designed with the web in mind, XML is a document format. Both can used to advantage on the web. Cheers, Danny. -- http://dannyayers.com
--- Chuck Hinson <chuck.hinson@...> wrote: > I'm having trouble understanding the interoperability issue. This is > HTTP we're talking about here. How many different web servers do we > have nowadays? App servers? Servlet containers? Maybe I haven't > been > paying attention, but I didn't realize there were any major > interoperability issues between them - my browser seems to work with > just about all of them with no problems. Why would having multiple > REST frameworks cause interoperability issues? It still happens.... See http://www.stucharlton.com/blog/archives/000126.html for an example of misinterpreting HTTP and MIME... Cheers Stu
Comments inline. --- Benjamin Carlyle <benjamincarlyle@...> wrote: > That's fine in the abstract sense, but > * An atom document has an atom structure > * A html document has a html structure > * A train list document has a train list structure > > These are the structures I really want to get at when I process > information from another component in the network. Ah, but that's what you want, because it seems you're taking the view of a developer. One of the desirable properties in an information system is to have "relevant" data on top. What's relevant is in the eye of the beholder. If one is dealing with application-level consumption of data, it makes sense to see application-level structures. If one is doing _data management_, or logical analysis of data, it makes sense to see it in an application-INdependent structure like relations or a graph, so that first-order predicate logic can be applied to the analysis. The former is "classic REST"; the latter seems to be the "RESTful semantic web" that TBL is traveling towards with Tabulator. This is similar to the old argument of object vs. relational databases. Most object databases had to re-implement set theory and FOPL to work on top of the application-biased domain models, if the business were to get any independence of data. Cheers Stu
Benjamin Carlyle wrote:
> On Sat, 2007-02-24 at 18:40 +0000, Bill de hOra wrote:
>> Benjamin Carlyle wrote:
>>> As I will point out later in the document, I
>>> don't think RDF is as conducive to good vocabulary evolution as
>>> XML.
>> XML isn't conducive to vocabulary evolution either. This is a very
>> strange juxtaposition. Most XML vocabularies I've seen that declare an
>> extensibility model end up defining a subset of what RDF defines.
>
> I think the evidence says otherwise.
Maybe you do; but I think what I've said is objectively true.
> We have html and other formats to
> demonstrate that the basic approach behind good XML development works.
> The important rules seem to be:
> * Use must-ignore semantics for anything that is not understood
RDF: check, but also at a higher level than parsing; RDF's notion of mI
extends into querying for example.
> * Don't define new namespaces for extensions, so the extensions can one
> day be merged back into the base document type
RDF says to define new nouns and relations if you need them. But so does
XML+xmlns in practice. Atom's foreign markup policy is arguably different.
> * Attack a specific problem space, align communities behind the common
> brand-name, and hammer things out until it all interoperates
For the XML family, this is written down, where? Look now at what's
going on with HTML5; the markup community (well, people like hixie) have
realised that how HTML is actually processed is not documented. Not
anywhere. The point I'm making is that XML doesn't begin to address
these issues because XML does not provide a processing model worth
talking about (which is equally a feature).
>
>>> RSS was defined in terms of RDF so that it
>>> could be easily aggregated. However, aggregation did not happen at
>> the
>>> RDF level in practice. Instead, RSS was aggregated at a higher
>> level.
>> But you don't say why that was. Why was that?
>
> I would guess: Because it wasn't useful. Because the graph structure is
> too low-level to meet application-specific data integration requirements
> automatically. Do you have any alternative thoughts on that?
I probably agree it wasn't useful, but I don't know what that has to do
with 'low-level' - Atom is 'low-level' compared to RSS1.0 on almost any
axis (except the encoding of content). I'd tend to assume RSS1.0 isn't
treated as RDF because people don't need or want what RDF can provide.
[Fwiw, most RSS files get treated as dictionaries not trees, the 'tree'
is a surface feature of the XML. Atom is deliberately designed that way.]
>
>>> Must-ignore semantics mean that a document with additional elements
>> will
>>> be ignored by old implementations.
>> mI in my mind is about having a trailing "else" in the code
>> that
>> logs to disk instead of throwing an exception. It's a sensible
>> programmatic default.
>
> The evidence seems to suggest that mI is critical to long-term evolution
> of documents. It is about handling messages from the future and from the
> past: Only require information if you need it to function. Ignore what
> you don't understand.
I think we're agreeing with each other (I see sensible programmatic
defaults as critical to long term evolution of software). But I think
you need to play around with RDF some more; technically it's way ahead
of what mI gives you. This is not a wild claim, nor am I saying mI
isn't an optimal default for XML work.
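To make the must-ignore idea concrete, here is a minimal sketch (Python stdlib; the namespaces and element names are made up for illustration): a consumer written against the base vocabulary processes the elements it knows and silently skips anything from the future.

```python
import xml.etree.ElementTree as ET

# An entry-like document extended with an element from a (made-up)
# extension namespace that an older consumer knows nothing about.
doc = """
<entry xmlns="http://example.org/base"
       xmlns:ext="http://example.org/ext">
  <title>Hello</title>
  <ext:rating>5</ext:rating>
</entry>
"""

BASE = "{http://example.org/base}"

def read_entry(xml_text):
    """Must-ignore consumer: pick out known elements, skip the rest."""
    root = ET.fromstring(xml_text)
    known = {}
    for child in root:
        if child.tag == BASE + "title":
            known["title"] = child.text
        # Anything else (ext:rating here) is silently ignored
        # rather than treated as an error.
    return known

print(read_entry(doc))  # {'title': 'Hello'}
```

The trailing "else that logs instead of throwing" is exactly the implicit `else: pass` in that loop.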
>>> RDF isn't really as flexible.
>> I can't agree. RDF's handling of unknown triples is far more flexible
>> than mI.
>
> Could you provide some examples of this?
Sure. For dataloading I can accept FOAF, RSS1.0, SKOS and OWL formats
without adapting the content on the way in - it will just load. I can
add SKOS extensions to a FOAF file. I can add RSS1.0 to DOAP or DOAP
to RSS1.0.
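A toy illustration of why that "it will just load" property holds (plain tuples standing in for a real RDF store; the triples below are illustrative, not real FOAF/SKOS data): an RDF graph is just a set of (subject, predicate, object) statements, so merging vocabularies is set union.

```python
# Toy RDF model: a graph is a set of (subject, predicate, object) triples.
foaf_data = {
    ("#me", "foaf:name", "Bill"),
    ("#me", "foaf:homepage", "http://example.org/"),
}
skos_extension = {
    ("#me", "skos:prefLabel", "Bill de hOra"),
}

# Dataloading different vocabularies needs no per-format adapter:
graph = foaf_data | skos_extension

# Queries that only understand FOAF simply don't match the SKOS triples;
# must-ignore falls out of the model rather than being a parsing rule.
names = {o for (s, p, o) in graph if p == "foaf:name"}
print(names)  # {'Bill'}
```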
With XML I have to think about attributes v elements, namespace pollution,
processing policies, empty/not-present, inclusion, entities, ids,
restrictions/orderings that result from DTDs. All entirely incidental to
the information and all varying along the toolchain. I can barely embed
XML in XML without defining a policy as to how that's going to compose.
And with XML, I need to handle each format differently.
What I can't do with RDF are the following:
- keep a track of which facts/triples came from where
- keep a track of data versioned over time
but neither can XML or RDBMSes out of the box.
There are also problems with the level of flexibility RDF gives you when
it comes to getting data back out, but I'll keep that discussion down to
a link which alludes to the issues I've seen:
http://www.dehora.net/journal/2007/02/off_by_one.html
[Please note for both RDF and XML, I'm not saying these are problems;
they're more like costs of doing business/barriers to entry. There's no
perfect language.]
> One voice in the room asked why he should care about uniform methods,
> when the component that receives a message still has to understand the
> whole thing. He didn't see the point of using a uniform method vs an ad
> hoc method when the whole message still had to be understood in a very
> specific way... and the thing is that in a static architecture he is
> exactly right.
No, he'd still be wrong. What WS types don't always appreciate is that
REST types make a distinction between the application layer
(connector/interface semantics) and everything above that (processing
about the world that is the case).
Enterprise and Internet types basically do not have a shared definition
of what 'application' implies. It doesn't help that REST types speak in
tongues a lot of time about 'implementation details' and 'engines of
application state'.
[In a spurious analogy, I see developers and release managers as doing
much the same thing when they talk about 'configuration'.]
> The goal of application-to-application integration is to constrain the
> kinds of message that are sent around an architecture so that the
> messages can be understood wherever they arrive.
As an aside - I think I know what you're saying, but I don't think I
agree. I see application protocols as largely solving an economic
problem with distributed systems. Keeping connectors logically separate
and having uniform *interface* semantics solves the same problem screw
threads, and containers, and plug sockets do. They push variation into a
better place for markets to deal with.
> Whenever the data
> schemas of two components line up, I should be able to configure them to
> talk to have specific kinds of interactions with each other. I might
> want them to have the GET interaction, or the PUT, or the SUBSCRIBE. The
> thing is that uniform methods are just an underpinning for uniform
> interactions, and that uniform data is still required.
I'm having difficulty believing you agree this is a good thing, but that a
uniform model of data isn't. RDF does exactly this for data interchange,
in a way that media types simply can't provide. For starters, media
types are extrinsic to the data.
> It just provides a
> uniform way of creating different data types.
RDF doesn't have an expressive enough type system to do that.
> Uniform data only comes
> about with RDF when you add vocabulary to it.
I disagree. Uniformity in RDF comes about through its model theory and
the consequent processing/inference that allows.
> Uniform data only comes
> about with XML when you add both vocabulary and structure to it.
You also need to provide policies for versioning, extension, etc. In
general you need to state processing models and lay down best practices
for XML based formats. One of the problems with XML based formats is
that people don't tend to write down what these are, and agreement needs
to be assumed into tools over a series of iterations ("hammering things
out" as you put it).
This is the kind of lack of attention to detail which has almost killed
off WS-*. Even if the Web/REST didn't exist as an alternative, SOAP/WS-*
exhibits massive variance in *implementations*, not just at the
interface boundaries. Whereas the very definition and essence of RDF is
in its processing model.
The graph/tree thing is a red-herring.
> It is necessary to prove that RDF facilitates better ways of
> constructing uniform kinds of data than XML does. RDF's uniform
> structure is not in and of itself a clear win for REST.
I'm sorry, but I think your conclusions only follow because your
technical notions of what RDF is are misguided. Claiming that XML and
RDF have equivalent expressive power is very strange to me, unless "XML"
means "XML, its family of specs, every useful XML based format, and
all the open source code ever written to handle those formats" (which
I've had to argue over before, more than once).
I see RDF's limited adoption as largely a social matter, and in part due
to it not fitting in well with the current computing landscape. We
_build_ very large distributed software systems much the same way
termites build their homes. RDF is a bit like giving termites dinky toy
JCBs.
I'll finish with one point (if you want to dispute what I've said here, go
for it). RDF is simple in the way mathematics is simple, it's not simple
the way really simple syndication is simple. It requires effort to
understand why it could be valuable, effort to get past cargo-cult
notions about data interchange issues, effort to understand why
technologies like RDF don't get deployed (there's a very clear history
of this), and still more effort to maintain any kind of precision and
clarity when talking about it. Being precise about data modeling is
boring; text encoding issues and OSS licences are a joke a minute by
comparison.
cheers
Bill
Hi, all, The scenario is as follows: Suppose that a request from a client needs to be processed by service A and then the result is either forwarded to service B or back to the client, based on the original request. WS-addressing can help to solve this. But I am not sure about a better REST way for it. What I can think of is to include the URL of service B in the XML posted to service A. Cheers, Dong
I have my mind pretty much made up already, but is there any justification for putting an API key (as usually required for a public-facing Web API) into the URI? I can't think of any reason why it would be preferable to a custom HTTP header. Thanks, Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Stefan Tilkov <stefan.tilkov@...> writes: > I have my mind pretty much made up already, but is there any > justification for putting an API key (as usually required for a > public-facing Web API) into the URI? I can't think of any reason why > it would preferable to a custom HTTP header. No reason at all. This is just the authentication issue in another guise. Worse than that, it's a "you can't use our possibly stateless API except by adding some state.... err.... duh" case. -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk
[ Attachment content not displayed ]
Stefan Tilkov wrote: > > > I have my mind pretty much made up already, but is there any > justification for putting an API key (as usually required for a > public-facing Web API) into the URI? I can't think of any reason why > it would be preferable to a custom HTTP header. Only if you absolutely had to, to get something done for next week. I'm not sure what that would be ;) Are you extending this thinking to phpsessionid, jsessionid and friends? cheers Bill
Stefan Tilkov wrote: >I have my mind pretty much made up already, but is there any >justification for putting an API key (as usually required for a >public-facing Web API) into the URI? I can't think of any reason why >it would preferable to a custom HTTP header. > > If the resource depended on the API key (e.g., were customized / localized / branded) it might be reasonable. (Or if you're trying to provide a read-only REST-like interface for cross-domain JSONP GETs so there's no other way to pass the key, maybe, though that's ugly.) -John
That's a good reason, thanks. Stefan On Mar 3, 2007, at 12:45 PM, Michael Walter wrote: > It makes it easy to explore the API in a browser (as would any kind > of standard browser-supported authentication mechanism, opposed to > a custom HTTP header). > > Regards, > Michael > > > On 3/3/07, Stefan Tilkov <stefan.tilkov@...> wrote: > I have my mind pretty much made up already, but is there any > justification for putting an API key (as usually required for a > public-facing Web API) into the URI? I can't think of any reason why > it would preferable to a custom HTTP header. > > Thanks, > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > > >
On Sat, 3 Mar 2007, Stefan Tilkov wrote:
> That's a good reason, thanks.
>
> Stefan
>
> On Mar 3, 2007, at 12:45 PM, Michael Walter wrote:
>
> > It makes it easy to explore the API in a browser (as would any kind
> > of standard browser-supported authentication mechanism, opposed to
> > a custom HTTP header).
How about both? Generally you want to enable some form of strictness but
also enable people to actually do things with the rather lame tools that
exist in the world. So anywhere you think you want to use a header, you
might also want to enable a way to dork it into the URI (query string,
whatever). API key, accept header, and method are good candidates.
Accept makes it easy to request other representations from the browser,
useful for debugging and exploring. Method is useful for situations like
Safari's lack of PUT and DELETE support in XMLHttpRequest.
I've worked with accept and method, but not API key. We've chosen to
do basic auth and cookie handling but are hearing requests for keys so
we may go that way too.
--
Chris Dent http://burningchrome.com/~cdent/mt
[...]
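Chris's "how about both" suggestion can be sketched server-side as a small lookup that prefers the header but falls back to the query string (a minimal sketch; the `X-API-Key` header and `apikey` parameter names are made up, not any standard, and frameworks vary):

```python
from urllib.parse import parse_qs, urlsplit

def extract_api_key(headers, url):
    """Prefer a custom header, fall back to a query parameter.

    'X-API-Key' and 'apikey' are illustrative names only.
    """
    key = headers.get("X-API-Key")
    if key:
        return key
    query = parse_qs(urlsplit(url).query)
    values = query.get("apikey")
    return values[0] if values else None

# Browser-friendly: the key can be dorked into the URI...
print(extract_api_key({}, "http://example.com/trains?apikey=abc123"))  # abc123
# ...while stricter clients use the header.
print(extract_api_key({"X-API-Key": "abc123"}, "http://example.com/trains"))  # abc123
```

The same pattern extends to `Accept` and method overrides, with the header winning whenever both are present.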
Yes, one could allow for an optional apikey= query parameter instead of the header. The problem with this, though, is that one can't prevent people from using this not just optionally, but as the default. Even if they only use this for development, this means they'd be using something different for development/testing than in production. Maybe the right way is to not require an API key when test data on the server side is accessed, but do so in production mode. Stefan On Mar 3, 2007, at 9:30 PM, Chris Dent wrote: > On Sat, 3 Mar 2007, Stefan Tilkov wrote: > > > That's a good reason, thanks. > > > > Stefan > > > > On Mar 3, 2007, at 12:45 PM, Michael Walter wrote: > > > > > It makes it easy to explore the API in a browser (as would any > kind > > > of standard browser-supported authentication mechanism, opposed to > > > a custom HTTP header). > > How about both? Generally you want to enable some form of > strictness but > also enable people to actually do things with the rather lame tools > that > exist in the world. So anywhere you think you want to use a header, > you > might also want to enable a way to dork it into the URI (query string, > whatever). API key, accept header, and method are good candidates. > > Accept makes it easy to request other representations from the > browser, > useful for debugging and exploring. Method is useful for situations > like > Safari's lack of PUT and DELETE support in XMLHttpRequest. > > I've worked with accept and method, but not API key. We've chosen to > do basic auth and cookie handling but are hearing requests for keys so > we may go that way too. > > -- > Chris Dent http://burningchrome.com/~cdent/mt > [...] > >
Stefan Tilkov <stefan.tilkov@...> writes: > Yes, one could allow for an optional apikey= query parameter instead > of the header. The problem with this, though, is that one can't > prevent people to use this not optionally, but as the default. Even > if they only use this for development, this means they'd be using > something different for development/testing than in production. > > Maybe the right way is to not require an API key when test data on > the server side is accessed, but do so in production mode. API keys are just authentication tokens aren't they? Why can't you use authentication to do this job? -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk
On 3/2/07, Dong Liu <edongliu@...> wrote: > Hi, all, > > The scenario is as follows: > > Suppose that a request from a client need to be processes by service A > and then the result is either forwarded to service B or back to the > client based on the original request. > > WS-addressing can help to solve this. But I am not sure about a better > REST way for it. What I can think is to include the URL of service B > in the XML posted to service A. WS-A is surprisingly less helpful than you think. You need to be sure that all participants in the conversation are using the same version of WS-A, that they have consistent implementations (which is hard to do in the absence of any meaningful tests), especially of the async stuff like the faulting bits. Then you need to have a client whose hostname remains constant over time, and with ports accessible to the caller. Believe me, it's hard to get working and doesn't like laptops or firewalls very well. Maybe there's a better way to model the interaction, instead of requests-routed-to-a-service, but as a conversation about resources, where the conversation itself becomes something you can refer to, the client polling it for state changes, its URL being passed around to interested parties. -steve
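Steve's "conversation as a resource" pattern can be sketched without any WS-A machinery. This is an in-memory simulation under assumed names (the `/conversations/` URIs, the states, and the handler functions are all made up): the client POSTs a request, gets back the URI of a new conversation resource, and polls it until service A (and possibly B) have recorded the outcome there.

```python
import itertools

# In-memory stand-in for service A's resource store; real code speaks HTTP.
conversations = {}
_ids = itertools.count(1)

def post_request(payload):
    """POST handler: create a conversation resource, return its URI
    (in HTTP terms, a 201 Created with a Location header)."""
    uri = f"/conversations/{next(_ids)}"
    conversations[uri] = {"state": "pending", "payload": payload, "result": None}
    return uri

def process(uri):
    """Service A does its work (possibly handing the URI to service B)
    and records the outcome on the conversation resource."""
    conv = conversations[uri]
    conv["result"] = conv["payload"].upper()  # stand-in for real processing
    conv["state"] = "done"

def get(uri):
    """GET handler: the client (or service B) polls the conversation."""
    return conversations[uri]

uri = post_request("hello")
process(uri)                 # happens asynchronously in real life
print(get(uri)["state"])     # done
print(get(uri)["result"])    # HELLO
```

Because the conversation has a URI, any interested party can GET it; no addressing envelope is needed.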
Bill Venners wrote: > I think there is something to what Mike is trying to say. I > agree with what seems to be the majority here that the > "froth" of multiple competing implementations is good in that > it encourages innovation and enables multiple tools focused > on different needs, but it can also confuse the marketplace. > For example, Python has a bunch of web frameworks that > compete. Ruby, by contrast, has Rails, which dominates so > much that you don't hear about any other Ruby web framework. Thanks for recognizing this. (Sorry for the late reply, I just found this email; need to update my inbox rules...) You explained it far more succinctly than I was able to. > I think that actually helps promote Ruby over Python to some > extent. So even though froth is good in general, there is a tradeoff. Exactly, and as I have been trying to decide [1] on a new programming language, *everything* points me to Python *except* that lack of critical mass around any one framework. I look at Rails and, though I don't like the language or the 'religious fervor' of the Rails adherents, I see how almost everyone using Ruby seems to be supporting Rails and recognize the huge benefits that has. In a similar manner, if REST ends up with tens of incompatible frameworks for each language it will become the Python to SOAP's Ruby on Rails; the better way but with so many incompatible solutions it will gain far less traction than it deserves. JMTCW anyway. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..." [1] http://www.mikeschinkel.com/blog/onthehuntforanewprogramminglanguage
Bill de hOra wrote:
> There's another point. If you are asking clients to
> generate uids to stuff into URLs, then you are breaking
> with the idea that URLs are opaque to clients. This might
> or might not be a problem, but it's such an important
> principle it has to asked. For example, without machine
> readable deployments of URI templates, how do I know to
> compose the URL
>
> {http://example.com/}{myuniqueid}
>
> in a way that doesn't bake in the first part?
Here's a thought/question: The URI Opacity Axiom was conceived prior to the
existence of URI Templates. Assuming URI Templates makes its way to a
recommendation/standard (not sure the correct term these days), is it
possible that its existence would rightfully cause the URI Opacity Axiom to
need to be reconsidered in certain contexts where URI Templates and guidance
for template variable substitution is provided?
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away
attempts to improve the web..."
Benjamin Carlyle wrote: > Let me have a quick go at debunking RDF. I have put a few > years thought into this. While I know that what I am > about to say works directly against deep assumptions of > current semantic web proponents, I believe it is grounded > in reality. I also believe that we need to face these > issues and come up with some good answers in order to > actually achieve the semantic web. > > <snip> From someone who just started exploring core web technologies and the semantic web mid-2006 but having since immersed myself into learning these things, I agree with the overall theme of your message though I dared not say it prior till now. (A point of note: I cannot definitely state agreement with every point you made because I honestly still don't have the background to fully appreciate all points.) However, since looking at all the core web technologies I've seen an elegance in many and a painfulness in others. The former have seen adoption like a snowball with critical mass left to roll down a hill whereas others seem to have had the efforts of Sisyphus trying to roll the boulder up the mountain only to have it always roll back down the hill. URLs, HTTP, HTML, RSS, and microformats have seemed to be the former, RDF and in many ways XML and XHTML the latter. I still can't put my finger on exactly why I feel the way I do about each of those technologies, but I think it has to do with the general approachability of the former, and the lack thereof for the latter. FWIW. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
On 3/3/07, Mike Schinkel <mikeschinkel@...> wrote: > Here's a thought/question: The URI Opacity Axiom was conceived prior to the > existence of URI Templates. Assuming URI Templates makes it's way to a > recommendation/standard (not sure the correct term these days), is it > possible that its existence would rightfully cause the URI Opacity Axiom to > need be reconsidered in certain contexts where URI Templates and guidance > for template variable substitution is provided? URIs are only as opaque as the standards around them deem them to be. For example, RFC 2617 (Basic and Digest) requires the client to look into the path of the URIs being accessed since a client is allowed to send credentials preemptively to a path deeper in the tree from a point that has already been authenticated. Similarly HTML requires a client to construct URIs, for example, from a form with method="GET". One of the original violations of URI opacity was a certain browser seeing .html at the end of a URI and assuming that the content returned was HTML, which is obviously not supported by any standard. -joe -- Joe Gregorio http://bitworking.org
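The form-with-method="GET" case Joe mentions amounts to the client composing a URI from the form's action and its field values. A minimal sketch (the action URL and field names here are made up):

```python
from urllib.parse import urlencode

# What a browser does when submitting <form method="GET" action="...">:
action = "http://example.com/search"
fields = {"q": "rest discuss", "page": "2"}  # illustrative field names

# Percent-encode the fields and append them as a query string.
uri = action + "?" + urlencode(fields)
print(uri)  # http://example.com/search?q=rest+discuss&page=2
```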
Does anyone else find the use of the acronym "APP" confusing as opposed to just "Atom?" Is there a reason they are different? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
On Mar 3, 2007, at 11:30 PM, Nic James Ferrier wrote: > API keys are just authentication tokens aren't they? > > Why can't you use authentication to do this job? > Plain authentication would be my #1 choice, but it might be used independently from the API key - i.e. the same user might use different third parties to interact with the API, so each time there'd be the same authentication information, but different API keys. Which brings up another topic ... worth another post. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ > > -- > Nic Ferrier > ---------------------------------------------------------- > Need a linux/java/python/web hacker? I'm in need of work! > ---------------------------------------------------------- > http://www.tapsellferrier.co.uk >
"Atom" commonly refers to the Atom Syndication Format (the RSS alternative) http://www.atomenabled.org/developers/syndication/atom- format-spec.php "APP" is the "Atom Publishing Protocol" (the REST API), http:// www.atomenabled.org/developers/syndication/atom-format-spec.php Stefan On Mar 4, 2007, at 8:42 AM, Mike Schinkel wrote: > Does anyone else find the use of the acronym "APP" confusing as > opposed to > just "Atom?" Is there a reason they are different? > > -- > -Mike Schinkel > http://www.mikeschinkel.com/blogs/ > http://www.welldesignedurls.org > http://atlanta-web.org - http://t.oolicio.us > "It never ceases to amaze how many people will proactively debate away > attempts to improve the web..." > > >
Stefan Tilkov: > "Atom" commonly refers to the Atom Syndication Format > (the RSS alternative) > http://www.atomenabled.org/developers/syndication/atom- > format-spec.php "APP" is the "Atom Publishing Protocol" > (the REST API), http:// > www.atomenabled.org/developers/syndication/atom-format > -spec.php > Am I going crazy, or didn't you just differentiate the two by giving me the same URL for both? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
Stefan Tilkov <stefan.tilkov@...> writes: > On Mar 3, 2007, at 11:30 PM, Nic James Ferrier wrote: > >> API keys are just authentication tokens aren't they? >> >> Why can't you use authentication to do this job? >> > > Plain authentication would me my #1 choice, but it might be used > independently from the API key - i.e. the same user might use > different third parties to interact with the API, so each time > there'd be the same authentication information, but different API > keys. Which brings up another topic ... worth another post. Eh? The API might be hosted in different places and the same user might make connections to those different hosts? But then it's just different authentication details isn't it? Sorry - am I being dense? -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk
Mike asks: > Does anyone else find the use of the acronym "APP" confusing as opposed to > just "Atom?" Is there a reason they are different? Though I'm not too partial to the APP acronym, given how often the term "app" is used in the IT field, there does need to be a distinction between the Atom XML format (what feeds and entries look like) and the Atom Publishing Protocol (how to use HTTP/REST to access/manipulate feeds/entries), since they are different things altogether. Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
Sorry, my explanation wasn't really understandable. I meant that an entity A might offer an API to access its services. Entities B and C might use this API, with APIKey(B) and APIKey(C) respectively. A user of A might grant B and/or C the right to interact with A on his or her behalf ... so a call from B to A would carry APIKey(B) and the user's credentials, and C would do the same, but use APIKey(C). A would be able to identify who is using the API (B or C) and also whether they're authorized to access the data of the user. Stefan On Mar 4, 2007, at 10:42 AM, Nic James Ferrier wrote: > Stefan Tilkov <stefan.tilkov@...> writes: > >> On Mar 3, 2007, at 11:30 PM, Nic James Ferrier wrote: >> >>> API keys are just authentication tokens aren't they? >>> >>> Why can't you use authentication to do this job? >>> >> >> Plain authentication would me my #1 choice, but it might be used >> independently from the API key - i.e. the same user might use >> different third parties to interact with the API, so each time >> there'd be the same authentication information, but different API >> keys. Which brings up another topic ... worth another post. > > Eh? > > The API might be hosted in different places and the same user might > make connections to those different hosts? > > But then it's just different authentication details isn't it? > > Sorry - am I being dense? > > -- > Nic Ferrier > ---------------------------------------------------------- > Need a linux/java/python/web hacker? I'm in need of work! > ---------------------------------------------------------- > http://www.tapsellferrier.co.uk >
My apologies - copy & paste error. APP is at http://bitworking.org/projects/atom/draft-ietf-atompub- protocol-13.html Stefan On Mar 4, 2007, at 9:25 AM, Mike Schinkel wrote: > Stefan Tilkov: > > "Atom" commonly refers to the Atom Syndication Format > > (the RSS alternative) > > http://www.atomenabled.org/developers/syndication/atom- > > format-spec.php "APP" is the "Atom Publishing Protocol" > > (the REST API), http:// > > www.atomenabled.org/developers/syndication/atom-format > > -spec.php > > > > Am I going crazy, or didn't you just differentiate the two by > giving me the > same URL for both? > > -- > -Mike Schinkel > http://www.mikeschinkel.com/blogs/ > http://www.welldesignedurls.org > http://atlanta-web.org - http://t.oolicio.us > "It never ceases to amaze how many people will proactively debate away > attempts to improve the web..." > > >
Stefan Tilkov <stefan.tilkov@...> writes: > Sorry, my explanation wasn't really understandable. > > I meant that an entity A might offer an API to access its services. > Entities B and C might use this API, with APIKey(B) and APIKey(C) > respectively. A user of A might grant B and/or C the right to > interact with A on his or her behalf ... so a call from B to A would > carry APIKey(B) and the user's credentials, and C would do the same, > but use APIKey(C). A would be able to identify who is using the API > (B or C) and also whether they're authorized to access the data of > the user. I don't see why this stops you using authentication to provide access to the API. Indeed, this is just the model I'm trying to build for OpenID authentication with http://prooveme.com What's the problem? -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk
Andrzej Jan Taramina wrote: > > > Mike asks: > > > Does anyone else find the use of the acronym "APP" confusing as > opposed to > > just "Atom?" Is there a reason they are different? I think "APP" could be confusing, but it's handier than typing in "Atom Protocol". cheers Bill
Mike Schinkel wrote:
>
> Here's a thought/question: The URI Opacity Axiom was conceived prior to the
> existence of URI Templates. Assuming URI Templates makes it's way to a
> recommendation/standard (not sure the correct term these days), is it
> possible that its existence would rightfully cause the URI Opacity Axiom to
> need be reconsidered in certain contexts where URI Templates and guidance
> for template variable substitution is provided?
Not generally. Joe pointed out some examples where clients inspect urls
(I'd forgotten about https: :\). I think it would depend on the
application involved. Or for general purposes, URI templates would need
to be extended to become dictionaries and have named segments:
{base="http://example.com/"}{id="myuniqueid"}
ie, like a real template/binding language. Which I think is getting
complicated and EPRish.
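A toy version of what "dictionaries and named segments" might look like (this predates any finished URI Templates spec, so the `{name}` syntax and the expander below are entirely made up for illustration):

```python
import re

def expand(template, bindings):
    """Replace {name} segments with values from a bindings dict.

    A minimal, made-up expander -- not any standardised template syntax.
    """
    def sub(match):
        return bindings[match.group(1)]
    return re.sub(r"\{(\w+)\}", sub, template)

uri = expand("{base}{id}", {"base": "http://example.com/", "id": "myuniqueid"})
print(uri)  # http://example.com/myuniqueid
```

The client still never parses a URI it receives; it only composes one where a template and bindings have been explicitly published, which is where the opacity question bites.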
The last time this got interesting was when Jon Udell started
integrating libraries and amazon by scraping ISBNs out of Amazon URLs. I
never heard of anybody directly criticizing what he was doing; maybe it
was because it was an insanely useful integration, despite the means.
cheers
Bill
Wouldn't that be client identifier - similar to user-agent in HTTP? > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Nic James Ferrier > Sent: Sunday, March 04, 2007 8:10 AM > To: Stefan Tilkov > Cc: REST Discuss > Subject: Re: [rest-discuss] Design question: "API Key" in URI > > Stefan Tilkov <stefan.tilkov@...> writes: > > > Sorry, my explanation wasn't really understandable. > > > > I meant that an entity A might offer an API to access its > services. > > Entities B and C might use this API, with APIKey(B) and APIKey(C) > > respectively. A user of A might grant B and/or C the right > to interact > > with A on his or her behalf ... so a call from B to A would carry > > APIKey(B) and the user's credentials, and C would do the > same, but use > > APIKey(C). A would be able to identify who is using the API > (B or C) > > and also whether they're authorized to access the data of the user. > > I don't see why this stops you using authentication to > provide access to the API. > > Indeed, this is just the model I'm trying to build for OpenID > authentication with http://prooveme.com > > What's the problem? > > -- > Nic Ferrier > ---------------------------------------------------------- > Need a linux/java/python/web hacker? I'm in need of work! > ---------------------------------------------------------- > http://www.tapsellferrier.co.uk
Absolutely, yes - although the "original" user-agent would be lost if this header were used for this purpose. But this is probably the best way to achieve what I'm looking for. Stefan On Mar 4, 2007, at 7:36 PM, Mike Dierken wrote: > Wouldn't that be client identifier - similar to user-agent in HTTP? > > > -----Original Message----- > > From: rest-discuss@yahoogroups.com > > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Nic James Ferrier > > Sent: Sunday, March 04, 2007 8:10 AM > > To: Stefan Tilkov > > Cc: REST Discuss > > Subject: Re: [rest-discuss] Design question: "API Key" in URI > > > > Stefan Tilkov <stefan.tilkov@...> writes: > > > > > Sorry, my explanation wasn't really understandable. > > > > > > I meant that an entity A might offer an API to access its > > services. > > > Entities B and C might use this API, with APIKey(B) and APIKey(C) > > > respectively. A user of A might grant B and/or C the right > > to interact > > > with A on his or her behalf ... so a call from B to A would carry > > > APIKey(B) and the user's credentials, and C would do the > > same, but use > > > APIKey(C). A would be able to identify who is using the API > > (B or C) > > > and also whether they're authorized to access the data of the > user. > > > > I don't see why this stops you using authentication to > > provide access to the API. > > > > Indeed, this is just the model I'm trying to build for OpenID > > authentication with http://prooveme.com > > > > What's the problem? > > > > -- > > Nic Ferrier > > ---------------------------------------------------------- > > Need a linux/java/python/web hacker? I'm in need of work! > > ---------------------------------------------------------- > > http://www.tapsellferrier.co.uk
On Mar 4, 2007, at 5:10 PM, Nic James Ferrier wrote: > Stefan Tilkov <stefan.tilkov@...> writes: > >> Sorry, my explanation wasn't really understandable. >> >> I meant that an entity A might offer an API to access its services. >> Entities B and C might use this API, with APIKey(B) and APIKey(C) >> respectively. A user of A might grant B and/or C the right to >> interact with A on his or her behalf ... so a call from B to A would >> carry APIKey(B) and the user's credentials, and C would do the same, >> but use APIKey(C). A would be able to identify who is using the API >> (B or C) and also whether they're authorized to access the data of >> the user. > > I don't see why this stops you using authentication to provide access > to the API. > I meant that I need the auth headers for authenticating the user, and another means to identify (or authenticate) the "agent" (in this case, B or C). > Indeed, this is just the model I'm trying to build for OpenID > authentication with http://prooveme.com > I'm not sure I really understand how prooveme.com works; the FAQ did not really clarify it for me. If you can explain it in this context, it would be very much appreciated, otherwise I'm happy to do some reading on my own first. Stefan > What's the problem? > > -- > Nic Ferrier > ---------------------------------------------------------- > Need a linux/java/python/web hacker? I'm in need of work! > ---------------------------------------------------------- > http://www.tapsellferrier.co.uk >
It might be premature generalization, but... in the most general case, you have at least two entities needing identification: (A) The person initiating the request (B) The agent service (intermediary) taking care of all or part of your request, and needing to make requests on A's behalf, with A's authorization for B to access A's data if necessary. (Obviously you could have additional agents involved in a chain as well, assuming that agents might call on services that need to call on services.) And of course the final service needs to verify that B is authorized to make certain requests on behalf of A at this time. Obviously an agent can take your authentication credentials and impersonate you. This gives B free access to anything A has access to, which is a problem. It also makes revocation and temporary access difficult. The 'right way' to do this is for A to declare that B is authorized to do certain things. Standardizing how you say "B", "authorized", and "things" is helpful in this context. I predict this will be an ongoing discussion on the OpenID mailing list. -John Mike Dierken wrote: > Wouldn't that be client identifier - similar to user-agent in HTTP? > > > > >> -----Original Message----- >> From: rest-discuss@yahoogroups.com >> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Nic James Ferrier >> Sent: Sunday, March 04, 2007 8:10 AM >> To: Stefan Tilkov >> Cc: REST Discuss >> Subject: Re: [rest-discuss] Design question: "API Key" in URI >> >> Stefan Tilkov <stefan.tilkov@...> writes: >> >> >>> Sorry, my explanation wasn't really understandable. >>> >>> I meant that an entity A might offer an API to access its >>> >> services. >> >>> Entities B and C might use this API, with APIKey(B) and APIKey(C) >>> respectively. A user of A might grant B and/or C the right >>> >> to interact >> >>> with A on his or her behalf ... so a call from B to A would carry >>> APIKey(B) and the user's credentials, and C would do the >>> >> same, but use >> >>> APIKey(C). 
A would be able to identify who is using the API >>> >> (B or C) >> >>> and also whether they're authorized to access the data of the user. >>> >> I don't see why this stops you using authentication to >> provide access to the API. >> >> Indeed, this is just the model I'm trying to build for OpenID >> authentication with http://prooveme.com >> >> What's the problem? >> >> -- >> Nic Ferrier >> ---------------------------------------------------------- >> Need a linux/java/python/web hacker? I'm in need of work! >> ---------------------------------------------------------- >> http://www.tapsellferrier.co.uk
Stefan Tilkov <stefan.tilkov@...> writes: > I meant that I need the auth headers for authenticating the user, and > another means to identify (or authenticate) the "agent" (in this > case, B or C). Right. I was intimating that the user needs to create a special set of authentication details to give to agents. >> Indeed, this is just the model I'm trying to build for OpenID >> authentication with http://prooveme.com >> > > I'm not sure I really understand how prooveme.com works; the FAQ did > not really clarify it for me. If you can explain it in this context, > it would be very much appreciated, otherwise I'm happy to do some > reading on my own first. prooveme.com is an OpenID provider based on client certs. You attempt to login to an OpenID site with one of our IDs (or a delegate) and the authentication is done with your certificate. But the really clever bit is what we're working on now: we can let you create more certificates for specific purposes and give them away to other entities. For example, if you want to let flikr login to blogger then you create a certificate that allows authentication only to blogger and then you give that certificate to flikr. When you want to stop flikr doing that you can revoke the certificate. This OpenID multi-access model doesn't have to use certificates, but it does require your ID and at least one secure token of information. Certificates could be used... usernames and passwords could be used. Of course, we (the prooveme.com team) have to get flikr to agree to use client certs to authenticate... but we think we'll be able to do that. Does that explain it? I think this is on-topic btw. It's all part of (quite) RESTfull APIs. It certainly seems a lot better than sticking crypto in a document being pushed around over some crazy WS-* protocol. -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! 
---------------------------------------------------------- http://www.tapsellferrier.co.uk
This was one of the reasons why I liked the idea of using xforms as
machine to machine hypermedia
<http://tech.groups.yahoo.com/group/rest-discuss/message/7298> . You
GET an xform... it runs in an xforms processor driven by an API. Your
client interacts with the simple controls via an API. Say "set control x
to 1 and set control y to 2, now submit". The xform hides all the
details of the URI template (I don't think xforms supports full URI
templating today but it could with tweaks) and/or the XML format that is
POSTed.
Another alternative would be to GET a javascript library that works in a
similar way, except there you would call functions. You could do this
from C or Java code with SpiderMonkey and Rhino. It's code on demand that
runs in a little sandbox. The only resource that it would have access to
is XMLHttpRequest.
I think both of these alternatives aren't ideal but in general I feel
the idea of hypermedia and/or code on demand for machine to machine
interaction is something worth considering. Basically, allow a client to
GET something that provides a machine interface to your web application.
I don't expect clients to be "smart" enough to automatically figure out
the interface at all. On first GET you'd have to inspect the interface
and write the code to work with it. The basic idea though is that the
interface you are working with is not the URI+HTTP+form data interface,
which is similar to what HTML gives you.
I know this isn't even close to a fully baked idea, but what is the
downside in the general approach here? When I first started reading
about REST this general direction struck me as the "Restful" way to do
machine to machine interaction. But nobody seems to be going in this
direction. What am I missing?
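The "set control x to 1 and set control y to 2, now submit" interaction above can be sketched in a few lines; everything here (the `Form` class, the control names, the URL) is invented for illustration and is far simpler than a real XForms processor API:

```python
# The client GETs a form description (here just a dict) and drives it
# through named controls, never composing the URI or the payload itself.
class Form:
    def __init__(self, description):
        self.action = description["action"]
        self.method = description["method"]
        self.controls = dict(description["controls"])  # name -> default value

    def set(self, name, value):
        if name not in self.controls:
            raise KeyError("unknown control %r" % name)
        self.controls[name] = value

    def submit(self):
        # A real client would perform the HTTP request here; returning
        # what would be sent shows that the URI and payload details stay
        # hidden behind the form abstraction.
        return (self.method, self.action, dict(self.controls))

# Pretend this description came from a GET on the service's form resource:
form = Form({
    "action": "http://example.com/orders",
    "method": "POST",
    "controls": {"x": 0, "y": 0},
})
form.set("x", 1)
form.set("y", 2)
print(form.submit())
# → ('POST', 'http://example.com/orders', {'x': 1, 'y': 2})
```

The design point is the one made above: the interface the client programs against is the controls, not URI+HTTP+form data.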
--- In rest-discuss@yahoogroups.com, "Mike Schinkel" <mikeschinkel@...>
wrote:
>
> Bill de hOra wrote:
> > There's another point. If you are asking clients to
> > generate uids to stuff into URLs, then you are breaking
> > with the idea that URLs are opaque to clients. This might
> > or might not be a problem, but it's such an important
> > principle it has to asked. For example, without machine
> > readable deployments of URI templates, how do I know to
> > compose the URL
> >
> > {http://example.com/}{myuniqueid}
> >
> > in a way that doesn't bake in the first part?
>
> Here's a thought/question: The URI Opacity Axiom was conceived prior
to the
> existence of URI Templates. Assuming URI Templates makes its way to a
> recommendation/standard (not sure the correct term these days), is it
> possible that its existence would rightfully cause the URI Opacity
Axiom to
> need to be reconsidered in certain contexts where URI Templates and
guidance
> for template variable substitution is provided?
>
> --
> -Mike Schinkel
> http://www.mikeschinkel.com/blogs/
> http://www.welldesignedurls.org
> http://atlanta-web.org - http://t.oolicio.us
> "It never ceases to amaze how many people will proactively debate away
> attempts to improve the web..."
>
On 3/4/07, Dong Liu <edongliu@...> wrote: > If the client is in fact a service, and it has a URL, then its URL can > be included in the messages passing around interest party. > > If the client is a user using browser, then maybe her/his email > address or telephone number can be the address that is always accessible. Well, they can both be turned into URIs. But why not go the full way? Why not give the client a URL too? > > I think the client-server notion is always a constraint of RESTful > services. The server exposes services using URL's, but the client may > not have URL's. Oh, we can fix that. Your client's physical possessions may not be on the network, but it doesn't mean they can't have URLs - it's just the act of talking to the devices themselves that has p > The request-response MEP is also a constraint for > messaging. Consider Polling. Scales surprisingly well where the request is a GET or a HEAD with an ETag or If-Modified-Since, and the endpoint remembers to set the expiry time in all responses. Most importantly, it ignores firewalls. If you rely on async messaging-based protocols then to get through firewalls you either need -a relay (e.g. SMTP relays) that provides a structured hole in the firewall -public proxy services (e.g. XMPP servers) These are both nice in that they also decouple routing; roaming and sometimes offline recipients can still get content. There's nothing to stop you running REST over XMPP, incidentally. When you compare to WS-* messaging, you have WS-Events, WS-Eventing and WS-Notification. All of these host a web server on the local system, to await responses. They don't go through firewalls, even though WS-Notification drafts always promised that such was possible (through the miracle that is WS-A). Certainly I've never seen one, and I'm the author of one of the few WS-N impls out there. As a result the only way to interop test my impl against others was to run the tests on a home server, with a public endpoint for responses. 
How did I get the results from these test runs back to my office desktop? Atom feed of test results -which I can't help find deeply ironic. NB, I never fully criticised WS-A in my first posting, here goes -tests were only written after the spec went into last call; too late for hard-to-test features to be removed. -multiple draft versions out in the field; WS-DM 1.0 actually depends on two different non-final drafts. -nothing in the specs to deal with incoming messages with addresses in different versions -few/no public EPRs for interop testing (WSO2 are proposing some) -outstanding security issues as flagged by Oracle/sun/sonic and others: http://lists.w3.org/Archives/Public/public-ws-addressing/2005May/0028.html the latter implies I could post a request with a fault to address that would cause the endpoint to post a fault back to the specified address, with the SOAP headers of my choice. That could be, well, troublesome. -steve -steve
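The conditional-GET polling pattern described above can be sketched with nothing but the standard library; the feed URL below is a placeholder, and the helper names are invented:

```python
import urllib.error
import urllib.request

def conditional_headers(etag=None, last_modified=None):
    """Build the validator headers for a conditional GET."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers

def poll(url, etag=None, last_modified=None):
    """Return (body, etag, last_modified); body is None when unchanged."""
    request = urllib.request.Request(
        url, headers=conditional_headers(etag, last_modified))
    try:
        with urllib.request.urlopen(request) as response:
            # 200: remember the new validators for the next poll.
            return (response.read(),
                    response.headers.get("ETag"),
                    response.headers.get("Last-Modified"))
    except urllib.error.HTTPError as error:
        if error.code == 304:  # Not Modified: keep the old validators
            return (None, etag, last_modified)
        raise

# First poll:
#   body, etag, modified = poll("http://example.com/feed")
# Later polls reuse the validators, so an unchanged feed costs only a
# bodyless 304:
#   body, etag, modified = poll("http://example.com/feed", etag, modified)
```

Combined with an expiry time on the responses, intermediaries can absorb most of the polling load before it ever reaches the origin.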
> > The request-response MEP is also a constraint for messaging. > > Consider Polling. Scales surprisingly well where the request is a GET > or a HEAD with a etag or if-modified-since, and the endpoint remembers > to set the expiry time in all responses. Most importantly, it ignores > firewalls. I've been using an approach where the message retrieved has a link to the next set of messages. This seemed to allow many clients to retrieve messages at their own pace and the result is a bunch of resources that essentially never change - an important property of distributing the caching. http://www.topiczero.com:8080/xmlrouter/ http://www.topiczero.com:8080/xmlrouter/events/chat.html?topic=rest-discuss http://www.topiczero.com:8080/xmlrouter/msgs/rest-discuss
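The "each page links to the next set of messages" traversal Mike describes can be sketched like this; the fetch is stubbed with an in-memory dict so the example is self-contained, and the page layout is invented:

```python
# Once written, a page never changes, so it caches indefinitely; only
# the head of the feed is ever revalidated. A client just follows
# "next" links at its own pace.
PAGES = {
    "/msgs/1": {"messages": ["a", "b"], "next": "/msgs/2"},
    "/msgs/2": {"messages": ["c"], "next": "/msgs/3"},
    "/msgs/3": {"messages": [], "next": None},  # current head of the feed
}

def fetch(url):
    # Stand-in for an HTTP GET returning a parsed representation.
    return PAGES[url]

def read_all(start):
    messages, url = [], start
    while url is not None:
        page = fetch(url)
        messages.extend(page["messages"])
        url = page["next"]
    return messages

print(read_all("/msgs/1"))
# → ['a', 'b', 'c']
```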
Apropos: http://www.identityblog.com/?p=701 -John
Bill de hOra wrote:
> Or for general purposes, URI templates would need to
> extended to become dictionaries and have named segments:
>
> {base="http://example.com/"}{id="myuniqueid"}
>
> ie, like a real template/binding language. Which I think
> is getting complicated and EPRish.
Actually, I've been assuming separation of concerns and that URI Templates
would continue down their current path, i.e. that URI Templates would not
constrain or dictate where the values come from to populate the template
thus allowing potentially different solutions to supply those values for
many different contexts. As one such context I can envision a format
designed to be used as the hypermedia component of REST, or even be a third
component that the hypermedia component leverages.
> The last time this got interesting was when Jon Udell
> started integrating libraries and amazon by scraping
> ISBNs out of Amazon URLs. I never heard of anybody
> directly criticizing what he was doing; maybe it was
> because it was an insanely useful integration, despite
> the means.
I actually read a lot of bitching about it (though I can't quickly dig up
the links), but the bitching seemed to be about principle while offering no
tangible alternative solution, and ignoring the pragmatism of Jon's design
and more importantly the minimal downsides if it broke. On that latter it
seems to me the magnitude of problem created by violating URI Opacity is
proportional to the amount of deployed code that utilizes the violation. In
Jon's Amazon scrape it was in one place on his server, which he could
quickly fix assuming Amazon did not eliminate the ISBN from the URL, and he
could probably devise a creative fix even if they did eliminate the ISBN
from the URL.
URI Templates combined with a standard for providing the metadata needed to
compose URLs seems to provide what would be needed to keep peace with the
spirit of URI Opacity and also be able to implement more advanced solutions.
FYI, I'm planning to do some research in that area for a project I'm
currently working on.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away
attempts to improve the web..."
Andrzej Jan Taramina wrote: > Though I'm not too partial to the APP acronym, given how > often the term "app" is used in the IT field, there does > need to be a distinction between the Atom XML format (what > feeds and entries look like) and the Atom Publishing Protocol > (how to use HTTP/REST to access/manipulate feeds/entries), > since they are different things altogether. Yes, it really is a regrettable usage. It would have been better had it been something like: - APUB - ARAP (Atom REST Access Protocol) - AAP (Atom Automation Protocol) Or almost anything else for that matter. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
wahbedahbe wrote: > This was one of the reasons why I liked the idea of using > xforms as machine to machine hypermedia . You GET an > xform... it runs in an xforms processor driven by an API. > Your client interacts with the simple controls via an > API. Say "set control x to 1 and set control y to 2, now > submit". The xform hides all the details of the URI > template (I don't think xforms supports full URI > templating today but it could with tweaks) and/or the XML > format that is POSTed. Have you seen my proposal [1] to the WHATWG? > I think both of these alternatives aren't ideal but in > general I feel the idea of hypermedia and/or code on > demand for machine to machine interaction is something > worth considering. I completely agree. > Basically, allow a client to GET > something that provides a machine interface to your web > application. I don't expect clients to be "smart" enough > to automatically figure out the interface at all. Maybe not at first, but I don't see any reason why, via a layering of technology, they couldn't be made to be smart enough. > first GET you'd have to inspect the interface and write > the code to work with it. The basic idea though is that > the interface you are working with is not the > URI+HTTP+form data interface, which is similar to what > HTML gives you. I don't follow up 100%... > I know this isn't even close to a fully baked idea, but > what is the downside in the general approach here? When I > first started reading about REST this general direction > struck me as the "Restful" way to do machine to machine > interaction. But nobody seems to be going in this > direction. What am I missing? Might be because of Roy's influence? I've felt he has been a bit dismissive of such directions because of (what seems to me to be) a concern REST would be thought of as a specification as opposed to an architectural style? (Roy: if I've misinterpreted please forgive; I'm just relaying my impressions.) 
-- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
> Yes, it really is a regrettable usage. > > It would have been better had it been something like: > > - APUB > - ARAP (Atom REST Access Protocol) > - AAP (Atom Automation Protocol) > > Or almost anything else for that matter. ATOMIC - ATOM Invocation Communications! Or some such.... ;-) Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
Mike Schinkel wrote: > Bill de hOra wrote: > >> The last time this got interesting was when Jon Udell >> started integrating libraries and amazon by scraping >> ISBNs out of Amazon URLs. I never heard of anybody >> directly criticizing what he was doing; maybe it was >> because it was an insanely useful integration, despite >> the means. > > I actually read a lot of bitching about it (though I can't quickly dig up > the links), but the bitching seemed to be about principle while offering no > tangible alternative solution, and ignoring the pragmatism of Jon's design > and more importantly the minimal downsides if it broke. I'm all for utilitarian computing. > On that latter it > seems to me the magnitude of problem created by violating URI Opacity is > proportional to the amount of deployed code that utilizes the violation. In > Jon's Amazon scrape it was in one place on his server, which he could > quickly fix assuming Amazon did not eliminate the ISBN from the URL, and he > could probably devise a creative fix even if they did eliminate the ISBN > from the URL. Good point, never thought of it as "deployed only once". cheers Bill
I apologise up front for what will be a rather lengthy post to the group. I would like some feedback on something that I've been thinking hard about recently: "acceptable responses". What do I mean by this? Well, we all know that one of the strengths of REST is that the interface is 'well-known' so we don't need to use WSDL (or equivalent) to bind to a RESTful endpoint - we just need a URL. As an example, we can quite happily have the following Request/Response to do a directory listing: --> GET /foo/ <-- 200 OK Content-Type: text/html <html> <body> <a href="dir1/">dir1</a> <a href="dir2/">dir2</a> <a href="file.txt/">Text File</a> <a href="file.html/">HTML File</a> </body> </html> Now, that's all fine and uncontroversial. A client can parse the response representation, discover the text/html content type, load the body into a DOM and select each href - perhaps with the XPath //@href This also allows the response representation to be loaded in a browser and viewed by a human. However... What if the client is a robot? The above example text/html representation will probably be fine for most robots, such as search crawlers. But I wonder if this is really nothing more clever than screen-scraping. (Note that this approach would treat all href URLs the same) The obvious answer to this is to support multiple representations. In the example above, the request implicitly carries an Accept: */* header value. However, a robot might make the following request instead: --> GET /foo/ Accept: application/rdf+xml <-- 200 OK Content-Type: application/rdf+xml <?xml version="1.0"?> <rdf:RDF> ... </rdf:RDF> Now the robot has obtained an RDF/XML representation of the directory contents. This is no longer simple screen-scraping, but semantically meaningful data. Again, this isn't terribly controversial. But note that we now have two alternate representations of the same resource and we have decided which is the default. 
Furthermore, there could be many other alternate representation formats - each as equally 'standard' as text/html or application/rdf+xml - such as text/plain or text/csv and so on. This means that any of the following requests might be acceptable: --> GET /foo/ Accept: text/plain --> GET /foo/ Accept: text/csv --> GET /foo/ Accept: text/html --> GET /foo/ Accept: application/rdf+xml This now draws me towards the crux of my question. How does a client discover the supported representations? The client might ping the resource with each known (to the client) Content-Type, in the following manner: --> HEAD /foo/ Accept: application/xml <-- 406 Not Acceptable --> HEAD /foo/ Accept: text/html <-- 200 OK Content-Type: text/html This has several drawbacks to it (latency and inefficiency both jump to mind). Some of you may be thinking that 300 Multiple Choices is the answer. But I suspect that, for this issue, the problem becomes circular at this point (what format should the 300 response representation itself use?). Furthermore, this issue is implicit in many requests. Consider what the representation format should be for any of the following: 1. An OPTIONS request (what are the supported representation formats?) 2. A 406 Not Acceptable response (what are acceptable representation formats?) 3. A 300 Multiple Choices response At first blush, we might decide that what we need is a standard representation. I thought about that and decided that it makes little sense - mostly for pragmatic and practical reasons, but also because it seems to fall foul of the same anti-pattern as WSDL. Then ... I thought about how allowed methods are communicated: --> OPTIONS /foo/ <-- 200 OK Allow: GET, HEAD, POST, PUT, DELETE I wonder if the answer to the dilemma outlined above is that we need an equivalent HTTP header for Content-Types? 
Consider the following: --> OPTIONS /foo/ <-- 200 OK Allow: GET, HEAD, POST, PUT, DELETE Acceptable: text/plain, text/html, application/rdf+xml --> GET /foo/ Accept: model/* <-- 406 Not Acceptable Acceptable: text/plain, text/html, application/rdf+xml This seems clean and elegant to me. What do you all think? Regards, Alan Dean http://thoughtpad.net/who/alan-dean/
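The `Acceptable` header proposed above is hypothetical, but if a server did emit it, a client might consume it along these lines (all function names here are invented for the sketch):

```python
# Parse the proposed "Acceptable" response header value and pick the
# first of the client's preferred media types the server supports.
def parse_acceptable(header_value):
    return [item.strip() for item in header_value.split(",") if item.strip()]

def choose(preferred, acceptable):
    for media_type in preferred:
        if media_type in acceptable:
            return media_type
    return None  # nothing usable: give up or fall back to probing

# e.g. from "Acceptable: text/plain, text/html, application/rdf+xml"
acceptable = parse_acceptable("text/plain, text/html, application/rdf+xml")
print(choose(["application/rdf+xml", "text/html"], acceptable))
# → application/rdf+xml
print(choose(["application/json"], acceptable))
# → None
```

One OPTIONS round trip then replaces the whole HEAD-per-media-type probing sequence described in the post.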
Alan Dean wrote: > I would like some feedback on something that I've been thinking hard > about recently: "acceptable responses". > > What do I mean by this? Well, we all know that one of the strengths > of REST is that the interface is 'well-known' so we don't need to use > WSDL (or equivalent) to bind to a RESTful endpoint - we just need a > URL. > > As an example, we can quite happily have the following > Request/Response to do a directory listing: > > --> > GET /foo/ > > <-- > 200 OK > Content-Type: text/html > > <html> > <body> > <a href="dir1/">dir1</a> > <a href="dir2/">dir2</a> > <a href="file.txt/">Text File</a> > <a href="file.html/">HTML File</a> > </body> > </html> > > [...] > > What if the client is a robot? > > The above example text/html representation will probably be fine for > most robots, such as search crawlers. But I wonder if this is really > nothing more clever than screen-scraping. (Note that this approach > would treat all href URLs the same) > > [...] > > <?xml version="1.0"?> > <rdf:RDF> > ... > <rdf:RDF> > > Now the robot has obtained an RDF/XML representation of the directory > contents. This is no longer simple screen-scraping, but semantically > meaningful data. Just because the data is more semantically precise, doesn't mean a client will know what to do with it. If a client can't handle RDF it's in no better shape than the one that can't handle the html. . And if the html used a documented xdmp+microformat combo, it's more or less the same work (structurally). > I wonder if the answer to the dilemna outlined above is that we need > an equivalent HTTP header for Content-Types? Consider the following: I think the world is moving towards providing different URLs for different formats: /foo/export.rdf /foo/export.html /foo/export.csv I'm not really sure if this is a good idea. It has the "advantage" of fixing your issue by avoiding all conneg machinery altogether - I suspect this is why it's popular. 
I haven't heard an architectural argument presented as to why it's a bad idea. cheers Bill > > Again, this isn't terribly controversial. But note that we now have > two alternate representations of the some resource and we have > decided which is the default. Furthermore, there could be many other > alternate representation formats - each as equally 'standard' as > text/html or application/rdf+xml - such as text/plain or text/csv and > so on. > > This means that any of the following requests might be acceptable: > > --> > GET /foo/ > Accept: text/plain > > --> > GET /foo/ > Accept: text/csv > > --> > GET /foo/ > Accept: text/html > > --> > GET /foo/ > Accept: application/rdf+xml > > This now draws me towards the crux of my question. > > How does a client discover the supported representations? > > The client might ping the resource with each known (to the client) > Content-Type, in the following manner: > > --> > HEAD /foo/ > Accept: application/xml > > <-- > 406 Not Acceptable > > --> > HEAD /foo/ > Accept: text/html > > <-- > 200 OK > Content-Type: application/rdf+xml > > This has several drawbacks to it (latency and inefficiency both jump > to mind). > > Some of you may be thinking that 300 Multiple Choices is the answer. > But I suspect that, for this issue, the problem becomes circular at > this point (what format should the 300 response representation itself > use?). > > Furthermore, this issue is implicit in many requests. Consider what > the representation format should be for any of the following: > > 1. An OPTIONS request (what are the supported representation formats?) > 2. A 406 Not Acceptable response (what are acceptable representation > formats?) > 3. A 300 Multiple Choices response > > At first blush, we might decide that what we need is a standard > representation. I thought about that and decided that it makes little > sense - mostly for pragmatic and practical reasons, but also because > it seems to fall foul of the same anti-pattern as WSDL. 
> > Then ... I thought about how allowed methods are communicated: > > --> > OPTIONS /foo/ > > <-- > 200 OK > Allow: GET, HEAD, POST, PUT, DELETE > > I wonder if the answer to the dilemma outlined above is that we need > an equivalent HTTP header for Content-Types? Consider the following: > > --> > OPTIONS /foo/ > > <-- > 200 OK > Allow: GET, HEAD, POST, PUT, DELETE > Acceptable: text/plain, text/html, application/rdf+xml > > --> > GET /foo/ > Accept: model/* > > <-- > 406 Not Acceptable > Acceptable: text/plain, text/html, application/rdf+xml > > This seems clean and elegant to me. What do you all think? > > Regards, > Alan Dean > http://thoughtpad.net/who/alan-dean/ > >
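To make Alan's proposal concrete, here is a minimal sketch of how a client might consume the proposed Acceptable response header. Note that Acceptable is a hypothetical header from this thread, not part of HTTP/1.1, and the `negotiate` helper below is likewise illustrative only.

```python
def negotiate(acceptable_header, client_prefs):
    """Pick the first client-preferred media type that the server
    advertises in the (hypothetical) Acceptable response header."""
    offered = [t.strip() for t in acceptable_header.split(",")]
    for preferred in client_prefs:
        if preferred in offered:
            return preferred
    return None  # nothing overlaps; expect a 406 on the next GET

# e.g. after: OPTIONS /foo/ -> Acceptable: text/plain, text/html, application/rdf+xml
chosen = negotiate("text/plain, text/html, application/rdf+xml",
                   ["application/rdf+xml", "text/csv"])
# the robot would then send: GET /foo/ with Accept: application/rdf+xml
```

The point of the design is that one OPTIONS round trip replaces the HEAD-probing dance entirely; the client only issues a GET once it knows an overlap exists.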
Bill de hOra <bill@...> writes: > I think the world is moving towards providing different URLs for > different formats: > > /foo/export.rdf > /foo/export.html > /foo/export.csv > > I'm not really sure if this is a good idea. It has the "advantage" of > fixing your issue by avoiding all conneg machinery altogether - I > suspect this is why it's popular. I haven't heard an architectural > argument presented as to why it's a bad idea. I don't like it. But we do need better user tools for dealing with content neg. Right now, all a user agent can say is "I accept X, Y, and possibly everything" which isn't good enough. What I want to be able to say, at least some of the time, is: I want JSON. -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk
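A client can at least bias the server toward one format today with quality values; what it cannot do, as Nic points out, is discover up front which formats exist. A small illustrative helper (the function name is mine, not from any library):

```python
def accept_header(preferred, fallbacks=()):
    """Build an Accept header value that ranks `preferred` above each
    fallback using descending q-values (RFC 2616 section 14.1)."""
    parts = [preferred]  # a media range with no q parameter implies q=1.0
    q = 0.9
    for media_type in fallbacks:
        parts.append("%s;q=%.1f" % (media_type, q))
        q -= 0.1
    return ", ".join(parts)

# "I want JSON, XML at a pinch, and anything else only as a last resort":
header = accept_header("application/json", ["application/xml", "*/*"])
# -> "application/json, application/xml;q=0.9, */*;q=0.8"
```

Even so, the `*/*` tail means the server may still hand back a format the client has no parser for, which is exactly the gap Nic is describing.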
On 3/5/07, Bill de hOra <bill@...> wrote: > > > > Now the robot has obtained an RDF/XML representation of the directory > > contents. This is no longer simple screen-scraping, but semantically > > meaningful data. > > Just because the data is more semantically precise, doesn't mean a > client will know what to do with it. I think it is reasonable to assume that a client who has specifically requested RDF will know how to handle it. > I think the world is moving towards providing different URLs for > different formats: > > /foo/export.rdf > /foo/export.html > /foo/export.csv > > I'm not really sure if this is a good idea. It has the "advantage" of > fixing your issue by avoiding all conneg machinery altogether - I > suspect this is why it's popular. I haven't heard an architectural > argument presented as to why it's a bad idea. "It has been very tempting from time to time for people to write software in which a client will look at a string such as ".html" on the end of an identifier, and come to a conclusion that it might be hypertext markup file when dereferenced. But these thoughts of breaking of the rule could lead to a broken architecture in which the generality of URIs is something one can no longer depend on." (Tim Berners-Lee) http://www.w3.org/DesignIssues/Axioms.html#opaque "Agents making use of URIs SHOULD NOT attempt to infer properties of the referenced resource." (W3C) http://www.w3.org/TR/webarch/#uri-opacity Regards, Alan
Hi Alan, Nice post. This is indeed an issue. I like the idea of leveraging OPTIONS as a way to obtain metadata on the resource. One option could be to return a WADL snippet describing only the resource that is the target of the OPTIONS method. Otherwise, your proposition to use new headers is even better. It is important to note that there are multiple aspects defining what is acceptable: the media type, the language, etc. If we add an "Acceptable:" header, we should also add "Acceptable-Language", "Acceptable-Charset" and "Acceptable-Encoding". Actually, as the "Accept-Ranges" response header already exists for similar purposes, we could simply reuse the request headers as response headers: "Accept" to list acceptable media types, "Accept-Language", "Accept-Charset" and "Accept-Encoding" for other metadata. Regards, Jerome Alan Dean a écrit : > > > I apologise up front for what will be a rather lengthy post to the > group. > > I would like some feedback on something that I've been thinking hard > about recently: "acceptable responses". > > What do I mean by this? Well, we all know that one of the strengths > of REST is that the interface is 'well-known' so we don't need to use > WSDL (or equivalent) to bind to a RESTful endpoint - we just need a > URL. > > As an example, we can quite happily have the following > Request/Response to do a directory listing: > > --> > GET /foo/ > > <-- > 200 OK > Content-Type: text/html > > <html> > <body> > <a href="dir1/">dir1</a> > <a href="dir2/">dir2</a> > <a href="file.txt/">Text File</a> > <a href="file.html/">HTML File</a> > </body> > </html> > > Now, that's all fine and uncontroversial. A client can parse the > response representation, discover the text/html content type, load > the body into a DOM and select each href - perhaps with the > XPath //@href > > This also allows the response representation to be loaded in a > browser and viewed by a human. > > However... > > What if the client is a robot? 
> > The above example text/html representation will probably be fine for > most robots, such as search crawlers. But I wonder if this is really > nothing more clever than screen-scraping. (Note that this approach > would treat all href URLs the same) > > The obvious answer to this is to support multiple representations. In > the example above, the request implicitly carries an Accept: */* > header value. However, a robot might make the following request > instead: > > --> > GET /foo/ > Accept: application/rdf+xml > > <-- > 200 OK > Content-Type: application/rdf+xml > > <?xml version="1.0"?> > <rdf:RDF> > ... > </rdf:RDF> > > Now the robot has obtained an RDF/XML representation of the directory > contents. This is no longer simple screen-scraping, but semantically > meaningful data. > > Again, this isn't terribly controversial. But note that we now have > two alternate representations of the same resource and we have > decided which is the default. Furthermore, there could be many other > alternate representation formats - each as equally 'standard' as > text/html or application/rdf+xml - such as text/plain or text/csv and > so on. > > This means that any of the following requests might be acceptable: > > --> > GET /foo/ > Accept: text/plain > > --> > GET /foo/ > Accept: text/csv > > --> > GET /foo/ > Accept: text/html > > --> > GET /foo/ > Accept: application/rdf+xml > > This now draws me towards the crux of my question. > > How does a client discover the supported representations? > > The client might ping the resource with each known (to the client) > Content-Type, in the following manner: > > --> > HEAD /foo/ > Accept: application/xml > > <-- > 406 Not Acceptable > > --> > HEAD /foo/ > Accept: text/html > > <-- > 200 OK > Content-Type: text/html > > This has several drawbacks to it (latency and inefficiency both jump > to mind). > > Some of you may be thinking that 300 Multiple Choices is the answer. 
> But I suspect that, for this issue, the problem becomes circular at > this point (what format should the 300 response representation itself > use?). > > Furthermore, this issue is implicit in many requests. Consider what > the representation format should be for any of the following: > > 1. An OPTIONS request (what are the supported representation formats?) > 2. A 406 Not Acceptable response (what are acceptable representation > formats?) > 3. A 300 Multiple Choices response > > At first blush, we might decide that what we need is a standard > representation. I thought about that and decided that it makes little > sense - mostly for pragmatic and practical reasons, but also because > it seems to fall foul of the same anti-pattern as WSDL. > > Then ... I thought about how allowed methods are communicated: > > --> > OPTIONS /foo/ > > <-- > 200 OK > Allow: GET, HEAD, POST, PUT, DELETE > > I wonder if the answer to the dilemma outlined above is that we need > an equivalent HTTP header for Content-Types? Consider the following: > > --> > OPTIONS /foo/ > > <-- > 200 OK > Allow: GET, HEAD, POST, PUT, DELETE > Acceptable: text/plain, text/html, application/rdf+xml > > --> > GET /foo/ > Accept: model/* > > <-- > 406 Not Acceptable > Acceptable: text/plain, text/html, application/rdf+xml > > This seems clean and elegant to me. What do you all think? > > Regards, > Alan Dean > http://thoughtpad.net/who/alan-dean/ > >
On 3/5/07, Jerome Louvel <contact@...> wrote: > Hi Alan, > > Nice post. This is indeed an issue. I like the idea to leverage OPTIONS > as a way to obtain metadata on the resource. One option could be to > return a WADL snippet describing only the resource that is the target of > the OPTIONS method. If you made the following request, yes you could: --> OPTIONS /foo/ Accept: application/vnd.sun.wadl+xml (one of the uglier MIME types) > > Otherwise, your proposition to use new headers is even better. It is > important to note that there are multiple aspects defining what is > acceptable, the media type, the language, etc. If we add an > "Acceptable:" header, we should also add "Acceptable-Language", > "Acceptable-Charset" and "Acceptable-Encoding". You are correct, logically the same principle applies to the other Accept... headers > > Actually, as the "Accept-Ranges" response header already exists for > similar purposes, we could simply reuse the request headers as response > headers: "Accept" to list acceptable media types, "Accept-Language", > "Accept-Charset" and "Accept-Encoding" for other metadata. I'm a little lost on the Accept-Ranges point. I thought that the purpose of that header was to retrieve a subset of the desired representation. http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.5 Regards, Alan
On re-using Accept in the response: For what it's worth, SIP uses an Accept in the OPTIONS response to indicate what types the resource itself can accept in requests. It makes sense that this would mean the same thing in the context of HTTP, i.e. this is what can be POSTed or PUT to the resource. --- In rest-discuss@yahoogroups.com, Jerome Louvel <contact@...> wrote: > > Hi Alan, > > Nice post. This is indeed an issue. I like the idea to leverage OPTIONS > as a way to obtain metadata on the resource. One option could be to > return a WADL snippet describing only the resource that is the target of > the OPTIONS method. > > [...]
On 3/5/07, wahbedahbe <andrew.wahbe@...> wrote: > > On re-using Accept in the response: > > For what it's worth, SIP uses an Accept in the OPTIONS response to > indicate what types the resource itself can accept in requests. > > It makes sense that this would mean the same thing in the context of > HTTP, i.e. this is what can be POSTed or PUT to the resource. Just to be clear - the Accept header is defined as a Request header and so cannot be used in the response. My idea is to use a new Response header called Acceptable to provide a list of supported Content-Types to the client. (Of course, unless it was formalised - it should be X-Acceptable) Alan
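On the server side, the X-Acceptable idea is easy to prototype. The sketch below uses Python's standard http.server purely for illustration; the header itself remains an unregistered extension, and `SUPPORTED` and `start_demo_server` are invented names.

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

SUPPORTED = ["text/plain", "text/html", "application/rdf+xml"]

class OptionsHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_OPTIONS(self):
        # Advertise both the allowed methods and the supported
        # representation formats, per the X-Acceptable proposal.
        self.send_response(200)
        self.send_header("Allow", "GET, HEAD, OPTIONS")
        self.send_header("X-Acceptable", ", ".join(SUPPORTED))
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):  # silence request logging for the demo
        pass

def start_demo_server():
    # Bind to an ephemeral port and serve in a background thread.
    server = HTTPServer(("127.0.0.1", 0), OptionsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A client issuing OPTIONS /foo/ against this server gets back `X-Acceptable: text/plain, text/html, application/rdf+xml` in a single round trip, instead of probing with one HEAD per candidate type.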
Understood. I was addressing Jerome's proposal to re-use Accept in the OPTIONS response. I was just saying that if an extension permitted this it would make sense for it to represent the type of data that could be submitted to the resource in a request. --- In rest-discuss@yahoogroups.com, "Alan Dean" <alan.dean@...> wrote: > > On 3/5/07, wahbedahbe <andrew.wahbe@...> wrote: > > > > On re-using Accept in the response: > > > > For what it's worth, SIP uses an Accept in the OPTIONS response to > > indicate what types the resource itself can accept in requests. > > > > It makes sense that this would mean the same thing in the context of > > HTTP, i.e. this is what can be POSTed or PUT to the resource. > > Just to be clear - the Accept header is defined as a Request header > and so cannot be used in the response. > > My idea is to use a new Response header called Acceptable to provide a > list of supported Content-Types to the client. > > (Of course, unless it was formalised - it should be X-Acceptable) > > Alan >
Hey Alan, On 3/5/07, Alan Dean <alan.dean@...> wrote: > This seems clean and elegant to me. What do you all think? We've been down this path before: "To do negotiation right, the client needs to be aware of all the alternatives *and* what it should use as a bookmark. In HTTP/1.1, all we managed to standardize is 300 and Vary. We needed Alternates and Link to make it work." -Roy [1] Some links related to the Alternates and Link headers here: [2] AFAIK, the reason we don't have the Alternates and Link headers is the existence of an HTML equivalent [3][4] [1] http://tech.groups.yahoo.com/group/rest-discuss/message/5916 [2] http://tech.groups.yahoo.com/group/rest-discuss/message/5975 [3] http://www.w3.org/TR/html4/struct/links.html#h-12.3.3 [4] http://www.w3.org/TR/html4/types.html#h-6.12 Sandeep Shetty http://sandeep.shetty.in/
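The HTML equivalent Sandeep refers to is the `<link rel="alternate">` element: the default representation can itself enumerate the other formats. A sketch of a client harvesting those links with Python's standard html.parser (class name invented here):

```python
from html.parser import HTMLParser

class AlternateLinks(HTMLParser):
    """Collect (media type, URL) pairs from <link rel="alternate"> elements."""
    def __init__(self):
        super().__init__()
        self.alternates = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)  # attrs arrives as a list of (name, value) pairs
        if tag == "link" and attrs.get("rel") == "alternate":
            self.alternates.append((attrs.get("type"), attrs.get("href")))

parser = AlternateLinks()
parser.feed('<html><head>'
            '<link rel="alternate" type="application/rdf+xml" href="/foo/export.rdf">'
            '<link rel="alternate" type="text/csv" href="/foo/export.csv">'
            '</head></html>')
# parser.alternates now maps each advertised media type to its URL
```

This keeps format discovery inside the representation rather than in new headers, at the cost of requiring every client to fetch and parse the HTML default first.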
Alan Dean wrote:
> If you made the following request, yes you could:
>
> -->
> OPTIONS /foo/
> Accept: application/vnd.sun.wadl+xml
>
> (one of uglier MIME types)
Hopefully it will be registered one day to become "application/wadl+xml"
or something simpler :-)
[...]
> > Actually, as the "Accept-Ranges" response headers already exists for
> > similar purposes, we could simply reuse the request headers as response
> > headers: "Accept" to list acceptable media types, "Accept-Language",
> > "Accept-Charset" and "Accept-Encoding" for other metadata.
>
> I'm a little lost on the Accept-Ranges point. I thought that the
> purpose of that header was to retrieve a subset of the desired
> representation.
>
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.5
My point was simply about header naming. I just wanted to illustrate
that "Accept-Ranges" is a response header that exposes the *acceptable*
ranges supported by a server.
Therefore, in order to be consistent, it would make sense to reuse the
"Accept", "Accept-Language", etc. request headers as response headers
instead of defining new ones ("Acceptable", etc.). Of course, this
should become formally described in an updated HTTP specification.
Best,
Jerome
Alan Dean wrote: > On 3/5/07, Bill de hOra <bill@...> wrote: >> > >> > Now the robot has obtained an RDF/XML representation of the directory >> > contents. This is no longer simple screen-scraping, but semantically >> > meaningful data. >> >> Just because the data is more semantically precise doesn't mean a >> client will know what to do with it. > > I think it is reasonable to assume that a client who has specifically > requested RDF will know how to handle it. I don't. Your text/html with file paths is no different from application/rdf+xml that contains OWL. RDF has the same problem as text/* one level up. > "It has been very tempting from time to time for people to write > software in which a client will look at a string such as ".html" on > the end of an identifier, and come to a conclusion that it might be > hypertext markup file when dereferenced. But these thoughts of > breaking of the rule could lead to a broken architecture in which the > generality of URIs is something one can no longer depend on." > > (Tim Berners-Lee) > http://www.w3.org/DesignIssues/Axioms.html#opaque > > "Agents making use of URIs SHOULD NOT attempt to infer properties of > the referenced resource." I wasn't talking about guessing URIs; the URIs I had in mind tend to be documented - e.g. Zimbra's. I was talking about doing an end run around conneg altogether, hence avoiding your problem. It's far from clear that conneg has worked out in practice. I don't know whether it's because it's not properly supported or evenly deployed (e.g. like PUT) or is just something better left on the whiteboard. cheers Bill
Nic James Ferrier wrote: > What I want to be able to say, at least some of the time is, I want > JSON. I suspect if you could say that, servers would respond with 303 See Other. cheers Bill
On Mar 4, 2007, at 9:03 PM, Nic James Ferrier wrote: > Stefan Tilkov <stefan.tilkov@...> writes: > > > I meant that I need the auth headers for authenticating the user, > and > > another means to identify (or authenticate) the "agent" (in this > > case, B or C). > > Right. I was intimating that the user needs to create a special set of > authentication details to give to agents. > > >> Indeed, this is just the model I'm trying to build for OpenID > >> authentication with http://prooveme.com > >> > > > > I'm not sure I really understand how prooveme.com works; the FAQ did > > not really clarify it for me. If you can explain it in this context, > > it would be very much appreciated, otherwise I'm happy to do some > > reading on my own first. > > prooveme.com is an OpenID provider based on client certs. You attempt > to log in to an OpenID site with one of our IDs (or a delegate) and the > authentication is done with your certificate. > So if I open up an account with prooveme.com, do I get one or more OpenID URIs? > But the really clever bit is what we're working on now: we can let you > create more certificates for specific purposes and give them away to > other entities. > > For example, if you want to let Flickr log in to blogger then you create > a certificate that allows authentication only to blogger and then you > give that certificate to Flickr. When you want to stop Flickr doing that > you can revoke the certificate. This idea - being able to grant access for specific purposes - is very cool. But for it to work, Flickr would have to have a business relationship with prooveme.com, right? Where is the user directory? Is Flickr supposed to have the client public keys in its own store somewhere? I'm confused; the reason is probably that I have never done client-side certificate-based authentication in earnest. > > This OpenID multi-access model doesn't have to use certificates, but > it does require your ID and at least one secure token of > information. 
Certificates could be used... usernames and passwords > could be used. > > Of course, we (the prooveme.com team) have to get Flickr to agree to > use client certs to authenticate... but we think we'll be able to do > that. > So would Flickr have to do something different for prooveme.com than for someothersecuritysite.com? Or just support standard certificate-based auth? > Does that explain it? > > I think this is on-topic btw. It's all part of (quite) RESTful > APIs. It certainly seems a lot better than sticking crypto in a > document being pushed around over some crazy WS-* protocol. > I agree - in fact, I think REST-style exposure of resources and verbs is an excellent match for granting privileges, i.e. if some intermediary asks "B has requested the right to PUT to /customers/13. Do you want to allow this?" this is quite a bit more meaningful than "C has asked to POST to /somewebservice; I can't decrypt the body, though, so don't ask for details. Agree?". Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ > -- > Nic Ferrier > ---------------------------------------------------------- > Need a linux/java/python/web hacker? I'm in need of work! > ---------------------------------------------------------- > http://www.tapsellferrier.co.uk > >
Stefan Tilkov <stefan.tilkov@...> writes: > So if I open up an account with prooveme.com, do I get one or more > OpenID URIs? One. But OpenID can be delegated of course. And we might make it possible to alias OpenIDs when you present the certificate. We can do that. No one's asked for it yet. > This idea - being able to grant access for specific purposes - is > very cool. But for it to work, Flickr would have to have a business > relationship with prooveme.com, right? Where is the user directory? > Is Flickr supposed to have the client public keys in its own store > somewhere? Yes. But we think it would be better if there were a bunch of providers doing what we're doing (there is already http://certifi.ca who are at least doing certificated OpenID) and then we can invent standard protocols for moving delegate certificates around. So, the challenge is to slightly abstract the token to the point where it can be passed around but then used in an implementation specific way. I think that's doable and that Flickr and Blogger and Gmail and whoever would support such an extension of OpenID. > I'm confused; the reason is probably that I have never done client-side > certificate-based authentication in earnest. They are difficult and confusing. But I think this is a model that can solve all these problems of delegated authority. And after it's set up people can pretty much stop thinking about client certificates and just think about delegation. > So would Flickr have to do something different for prooveme.com than > for someothersecuritysite.com? Or just support standard > certificate-based auth? Just support standard certificate-based auth. I really think certificates at least should be required from an OpenID provider. > I agree - in fact, I think REST-style exposure of resources and verbs > is an excellent match for granting privileges, i.e. if some > intermediary asks "B has requested the right to PUT to /customers/13. > Do you want to allow this?" 
this is quite a bit more meaningful than > "C has asked to POST to /somewebservice; I can't decrypt the body, > though, so don't ask for details. Agree?". Yes. Absolutely. And that's the level we're talking about. -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk
Mike Schinkel wrote: > I have no problem with sharing ideas. But just as Roy Fielding does not > believe it is a good thing for a GET to change state, I do not believe it is > a good thing to have lots of libraries and frameworks offered with what > amounts to arbitrary differences. I think it should be a best practice to > have *consideration* for prior art and not to duplicate prior art if there > are no obvious benefits. I think the unnecessary fragmentation of libraries > and frameworks holds back progress. > I believe that at this early stage nothing is obvious. We do not fully know what the benefits and disadvantages of each approach are. Many experiments must be performed and systems built before we even realize what are the right questions to be asking, much less what the answers are. Currently all these frameworks have pretty close to zero adoption. Thus it is a wonderful time to try many different things. Fragmentation is irrelevant at this low level of user interest. We want to fragment as much as we can before people notice the space. Once they do, we'll want to be ready with best practices, standard frameworks, and the like. However we can only do that by trying lots of different things now. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Hi, I've got another nitty-gritty issue to get feedback from the group on. I have been thinking about RESTful move operations. Obviously, a RESTful system should not require a MOVE verb, and the operation can be carried out as follows (assume that the representation already exists): --> GET /foo.txt <-- HTTP/1.1 200 OK ETag: "hash" Content-Type: text/plain Content-Length: 16 this is the body --> PUT /bar.txt If-None-Match: * Content-Type: text/plain Content-Length: 16 this is the body <-- HTTP/1.1 201 Created --> DELETE /foo.txt If-Match: "hash" <-- HTTP/1.1 204 No Content That's fine. The question I am wrestling with at the moment is "how does the client tell the server to redirect requests from /foo.txt to /bar.txt (either temporarily or permanently)?" I had a look into the Atom Publishing Protocol to see if this is a matter that they have thought about, but I can't find any reference. I had feedback that you might use the Location header - but I don't much like the feel of that as it is a response header. Where my thinking is right now is that the server could support a PUT of Content-Type message/http, in the following manner: --> PUT /foo.txt Content-Type: message/http Content-Length: 49 HTTP/1.1 301 Moved Permanently Location: /bar.txt <-- HTTP/1.1 201 Created --> GET /foo.txt <-- HTTP/1.1 301 Moved Permanently Location: /bar.txt What do you think? Regards, Alan Dean http://thoughtpad.net/who/alan-dean/
Alan Dean schrieb: > > > Hi, > > I've got another nitty-gritty issue to get feedback from the group on. > > I have been thinking about RESTful move operations. Obviously, a > RESTful system should not require a MOVE verb, and the > operation can be carried out as follows (assume that the > representation already exists): Can be, but server will lose information about the identity (think version history), and of course it's not efficient for large resources. Why not use MOVE? > [...] > > What do you think? <http://greenbytes.de/tech/webdav/rfc4437.html#METHOD_MKREDIRECTREF>. Best regards, Julian
On 3/8/07, Julian Reschke <julian.reschke@...> wrote: > Alan Dean schrieb: > > I've got another nitty-gritty issue to get feedback from the group on. > > > > I have been thinking about RESTful move operations. Obviously, a > > RESTful system should not require a MOVE verb, and the > > operation can be carried out as follows (assume that the > > representation already exists): > > Can be, but server will lose information about the identity (think > version history), and of course it's not efficient for large resources. > Why not use MOVE? Two reasons: 1) MOVE isn't a REST method. 2) Even if it were, you would still face the issue of how the client should inform the server that a redirect is required. For example, it is easy to imagine a variant of my example where the representation is moved to a different domain. There a MOVE would be of no use, as the original server has no control over the destination server. (imagine a blogger moving entries from one blog provider to another, but wanting redirects from the old addresses to the new). Regards, Alan
Alan Dean schrieb:
> On 3/8/07, Julian Reschke <julian.reschke@gmx.de> wrote:
> > Alan Dean schrieb:
> > > I've got another nitty-gritty issue to get feedback from the group on.
> > >
> > > I have been thinking about RESTful move operations. Obviously, a
> > > RESTful system should not require a MOVE verb, and the
> > > operation can be carried out as follows (assume that the
> > > representation already exists):
> >
> > Can be, but server will lose information about the identity (think
> > version history), and of course it's not efficient for large resources.
> > Why not use MOVE?
>
> Two reasons:
>
> 1) MOVE isn't a REST method.

Can you point me to the definition of a REST method?

> 2) Even if it were, you would still face the issue of how the client
> should inform the server that a redirect is required. For example, it
> is easy to imagine a variant of my example where the representation is
> moved to a different domain. There a MOVE would be of no use, as the
> original server has no control over the destination server. (imagine a
> blogger moving entries from one blog provider to another, but wanting
> redirects from the old addresses to the new).

That's why I pointed you to MKREDIRECTREF.

Best regards, Julian
On Thu, 2007-03-08 at 20:21 +0000, Alan Dean wrote:
> Two reasons:
>
> 1) MOVE isn't a REST method.
REST doesn't have methods; only HTTP does.
Some HTTP extension — such as WebDAV — might have a MOVE method, and/or
a MKREDIRECTREF method, as Julian points out.
> 2) Even if it were, you would still face the issue of how the client
> should inform the server that a redirect is required. For example, it
> is easy to imagine a variant of my example where the representation is
> moved to a different domain. There a MOVE would be of no use, as the
> original server has no control over the destination server. (imagine a
> blogger moving entries from one blog provider to another, but wanting
> redirects from the old addresses to the new).
MOVE /a
Location: /b
MOVE /a
Location: http://other.srv/b
Seems pretty straightforward.
Otherwise, I think this "move" operation is probably best as a PUT or
POST against the being-moved resource.
C: GET /a
S: <moveable href="./move">
S: <field name="redirect_to" type="relative_or_absolute_url"/>
S: </moveable>
C: POST /a/move # Or PUT, if you like...
C: redirect_to=/b
S: 201 Created; Location: /b
C: GET /a
S: 301; Location: /b
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org;echo ${a}@${b}
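[Editorial note: jsled's POST-based move above can be simulated end to end with a dict standing in for the server. The `/a/move` sub-resource and `redirect_to` field come from his example; the storage model, handler names, and the status-code choice for the temporary case (307) are assumptions added for illustration.]

```python
# path -> ("body", text) or ("redirect", status_code, target)
resources = {"/a": ("body", "hello")}

def handle_get(path):
    kind = resources.get(path)
    if kind is None:
        return (404, {}, "")
    if kind[0] == "redirect":
        # A moved resource answers with its 3xx and a Location header
        return (kind[1], {"Location": kind[2]}, "")
    return (200, {}, kind[1])

def handle_post_move(path, redirect_to, permanent=True):
    kind = resources.get(path)
    if kind is None or kind[0] != "body":
        return (404, {}, "")
    resources[redirect_to] = kind          # the new location gets the body
    code = 301 if permanent else 307       # the client chooses the lifetime
    resources[path] = ("redirect", code, redirect_to)
    return (201, {"Location": redirect_to}, "")

handle_post_move("/a", "/b")
status, headers, _ = handle_get("/a")
# /a now answers 301 with Location: /b, and /b serves the old body
```

Note how the `permanent` flag answers Alan's question about temporary vs. permanent redirects: in this design the client states the lifetime in the POST, and the server encodes it as the stored 3xx status.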
Josh Sled schrieb:
> MOVE /a
> Location: /b
>
> MOVE /a
> Location: http://other.srv/b

s/Location/Destination/

> Seems pretty straightforward.
>
> Otherwise, I think this "move" operation is probably best as a PUT or
> POST against the being-moved resource.
>
> C: GET /a
> S: <moveable href="./move">
> S: <field name="redirect_to" type="relative_or_absolute_url"/>
> S: </moveable>
>
> C: POST /a/move # Or PUT, if you like...
> C: redirect_to=/b
> S: 201 Created; Location: /b
>
> C: GET /a
> S: 301; Location: /b

You could do that, although I'd argue that using PUT would be incorrect here -- after all, you're not setting a new representation of the resource.

So, at the end of the day, where's the advantage over using WebDAV, except for the problem that WebDAV seems to suffer from a "not invented here" image over here?

Best regards, Julian
Julian Reschke <julian.reschke@...> writes: > So, at the end of the day, where's the advantage over using WebDAV, > except for the problem that WebDAV seems to suffer from an "not invented > here" image over here? Because browsers don't have good webdav clients, lots of languages don't have good webdav clients, webdav clients are hardly ever installed and are bloody hard to write, etc... webdav is not REST because REST is the HTTP verbs. [sticks out tongue and make "nya" noise] -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk PS sorry I may be spending too much time on http://jyte.com
On Thu, 2007-03-08 at 21:45 +0100, Julian Reschke wrote:
> So, at the end of the day, where's the advantage over using WebDAV,
> except for the problem that WebDAV seems to suffer from a "not invented
> here" image over here?
WebDAV seems to solve a bigger problem. I guess one could cherry pick
the MOVE/MKREDIRECTREF methods, call it "webdav-friendly" or something
and be done with it; I'm not sure what else that would pull in as
conceptual or technical dependencies...
It's just often easier to re-write something to solve the specific
problem at hand than it is to use an existing
technology/spec/toolkit/&c. Reminds me of the Robert Glass "fact" that
a 25% increase in problem complexity brings a 100% increase in solution
complexity.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org;echo ${a}@${b}
Nic James Ferrier schrieb:
> Julian Reschke <julian.reschke@...> writes:
>
>> So, at the end of the day, where's the advantage over using WebDAV,
>> except for the problem that WebDAV seems to suffer from a "not invented
>> here" image over here?
>
> Because browsers don't have good webdav clients, lots of languages
> don't have good webdav clients, webdav clients are hardly ever
> installed and are bloody hard to write, etc...

But in this case we're talking about a case where somebody wants to move a resource on a server, and obviously controls both ends. So unless the client HTTP library doesn't allow non-RFC2616 method names, there shouldn't be any problem.

> webdav is not REST because REST is the HTTP verbs.

Nope.

> [sticks out tongue and make "nya" noise]

Best regards, Julian
Well, this is interesting :-)

I may have to revise an earlier comment I made. Julian Reschke said "Why not use MOVE?" I responded with two reasons, one of which was that MOVE isn't a REST method, which was challenged. So, off I toddled to Roy's dissertation and looked ... no definition. Indeed, there is nothing in the dissertation stipulating the four verbs we typically talk about around here.

http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm

Next, I toddled off to the HTTP spec and found that MOVE is specified as a core method:

http://www.w3.org/Protocols/HTTP/1.1/spec.html#MOVE

Now, whilst I have no desire to use WebDAV extension methods, I don't have any intrinsic objection to anything that is in the core of the protocol spec (which is why I support and leverage OPTIONS). Therefore, I must reconsider MOVE. What does the spec say?

"The MOVE method requests that the resource identified by the Request-URI be moved to the location(s) given in the URI header field of the request. This method is equivalent to a COPY immediately followed by a DELETE, but enables both to occur within a single transaction."

Note: this solves one of my concerns - namely that the Location header is a response header and therefore is not appropriate for use in a request. The spec says to use the URI header:

http://www.w3.org/Protocols/HTTP/1.1/spec.html#URI-header

So, the client is asking the server to carry out the move operation on its behalf. That's fine, but how is the server to know if the movement is temporary or permanent, and thus which is the correct 3xx response to provide upon receipt of requests to the original URL?

Next, this makes sense as an atomic operation when both the origin and destination URLs reside on the same host, but what if the destination is on a different host? This was my blogger example given earlier. Is the client to expect the server to carry out that movement?
In that scenario, how should the credentials be handled if the destination server issues a challenge? Assuming that the client does not trust any other agent with its security credentials, we are back at stage 1 and MOVE does not solve the problem (even though I am now starting to think it may be valid within a RESTful application).

In which case, the solution could well still be a PUT carrying Content-Type: message/http (and possibly also supporting MOVE for operations entirely on the same host).

Thanks for the discussion everyone - this is getting my creative juices flowing.

Regards, Alan
Alan, you're looking at an outdated draft. RFC2616 (HTTP/1.1) doesn't define MOVE. But RFC2518 (WebDAV) does. Best regards, Julian
Oops. Thanks for the correction. Alan On 3/8/07, Julian Reschke <julian.reschke@...> wrote: > Alan, > > you're looking at an outdated draft. RFC2616 (HTTP/1.1) doesn't define > MOVE. But RFC2518 (WebDAV) does. > > Best regards, Julian >
"Alan Dean" <alan.dean@...> writes:
> Well, this is interesting :-)
>
> I may have to revise an earlier comment I made.
>
> Julian Reschke said "Why not use MOVE?"
>
> I responded with two reasons, one of which was that MOVE isn't a REST
> method, which was challenged. So, off I toddled to Roy's dissertation
> and looked ... no definition. Indeed, there is nothing in the
> dissertation stipulating the four verbs we typically talk about around
> here.
>
> http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
>
> Next, I toddled off to the HTTP spec and found that MOVE is specified
> as a core method:
>
> http://www.w3.org/Protocols/HTTP/1.1/spec.html#MOVE
That is not the HTTP spec. The HTTP spec is RFC2616 published by the
IETF and it says of method:
5.1.1 Method
The Method token indicates the method to be performed on the
resource identified by the Request-URI. The method is case-sensitive.
Method = "OPTIONS" ; Section 9.2
| "GET" ; Section 9.3
| "HEAD" ; Section 9.4
| "POST" ; Section 9.5
| "PUT" ; Section 9.6
| "DELETE" ; Section 9.7
| "TRACE" ; Section 9.8
| "CONNECT" ; Section 9.9
| extension-method
extension-method = token
You're right that Roy's dissertation doesn't say anything about webdav.
We've only fairly recently had quite long discussions about webdav. I
would check the archives since the summer.
--
Nic Ferrier
----------------------------------------------------------
Need a linux/java/python/web hacker? I'm in need of work!
----------------------------------------------------------
http://www.tapsellferrier.co.uk
On 3/8/07, Nic James Ferrier <nferrier@...> wrote:
>
> That is not the HTTP spec. The HTTP spec is RFC2616 published by the
> IETF and it says of method:

Yes, my bad.

> You're right that Roy's dissertation doesn't say anything about webdav.
>
> We've only fairly recently had quite long discussions about webdav. I
> would check the archives since the summer.

Yes, I wasn't planning to head down the WebDAV route - this quotes Roy:

http://www.mail-archive.com/microformats-rest@.../msg00189.html

Going back to the move operation issue - does anyone see any inherent problem with my message/http idea?

Alan
Nic James Ferrier schrieb:
> You're right that Roy's dissertation doesn't say anything about webdav.

But then it also doesn't say anything about a *specific* set of methods.

Believe it or not, the set of methods currently defined in RFC2616 is not the result of a big design plan; it just happens that those methods that weren't widely used at some point of time were taken out (probably due to the standards process), and were left for subsequent efforts to be picked up, such as WebDAV.

Whether or not all WebDAV methods have been defined *well*, and whether they are all needed, is a separate discussion. But just because something doesn't appear in RFC2616 doesn't make it not "restful" per se. Consider LINK or PATCH, for example.

> ...

Best regards, Julian
Maybe I should be using Content-Type: application/http rather than message/http, see:

http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html

"The application/http type can be used to enclose a pipeline of one or more HTTP request or response messages (not intermixed)."

(enclose being the operative word)

Alan
Alan Dean schrieb:
> > You're right that Roy's dissertation doesn't say anything about webdav.
> >
> > We've only fairly recently had quite long discussions about webdav. I
> > would check the archives since the summer.
>
> Yes, I wasn't planning to head down the WebDAV route - this quotes Roy:
>
> http://www.mail-archive.com/microformats-rest@.../msg00189.html
>
> Going back to the move operation issue - does anyone see any inherent
> problem with my message/http idea?

Actually, that was about PROPFIND/PROPPATCH, not about the namespace operations (COPY/MOVE). Can we please stay on topic?

I've seen arguments that COPY and MOVE could be done "more restful" by manipulating collection membership by updating collection representations, but as far as I can tell, that would still require two separate HTTP requests (one to the source collection, one to the target collection), and I really don't get how this is better than just telling the server to *move* the resource (yes, I'm aware of the problem that MOVE can only address either the source or the target of the operation, but that's an HTTP limitation, not a problem of WebDAV).

Best regards, Julian
Julian Reschke <julian.reschke@...> writes:
>
> I've seen arguments that COPY and MOVE could be done "more restful" by
> manipulating collection membership by updating collection
> representations, but as far as I can tell, that would still require two
> separate HTTP requests (one to the source collection, one to the target
> collection), and I really don't get how this is better than just telling
> the server to *move* the resource (yes, I'm aware of the problem that
> MOVE can only address either the source or the target of the operation,
> but that's an HTTP limitation, not a problem of WebDAV).

But I think we've talked about the problem of MOVE over and over again, haven't we? It's the same as the problem with PUT and DELETE.

Many proxies don't support anything but POST, GET, HEAD and OPTIONS. Support for OPTIONS is extremely sketchy, so MOVE is right out.

Maybe a move would be as simple as a POST to a resource with a specified Location header expecting a 301; eg:

POST /sourceresource HTTP/1.1
Location: /destresource
=> 301

Isn't that all you'd need? It would be supported by most clients, most servers and most intermediaries.

--
Nic Ferrier
----------------------------------------------------------
Need a linux/java/python/web hacker? I'm in need of work!
----------------------------------------------------------
http://www.tapsellferrier.co.uk
On Mar 8, 2007, at 3:11 PM, Alan Dean wrote: > Yes, I wasn't planning to head down the WebDAV route - this quotes > Roy: > > http://www.mail-archive.com/microformats-rest@.../ > msg00189.html Not as well as <http://tech.groups.yahoo.com/group/rest-discuss/message/5874> > Going back to the move operation issue - does anyone see any inherent > problem with my message/http idea? Too many wasted bits. REST is not limited to four methods. It is limited to uniform methods that mean the same thing for every resource that allows them. HTTP/1.x, however, is not suitable for multi-target methods. ....Roy
On Mar 8, 2007, at 3:19 PM, Julian Reschke wrote: > Nic James Ferrier schrieb: > > You're right that Roy's dissertaion doesn't say anything about > webdav. > > But then it also doesn't say anything about a *specific* set of > methods. > > Believe it or not, the set of methods currently defined in RFC2616 is > not the result of a big design plan; it just happens that those > methods > that weren't widely used at some point of time were taken out > (probably > due to the standards process), and were left for subsequent efforts to > be picked up, such as WebDAV. Actually, that was the result of a big design plan, which occurred before the HTTP working group began in 1994. There were a lot more prototype methods in the hypertext "as implemented" spec. What you are probably thinking of is the set of methods in the original HTTP/1.1 proposal, which were later trimmed by the WG to fit the scope of what the WG considered could be standardized at that time. > Whether or not all WebDAV methods have been defined *well*, and > whether > they are all needed, is a separate discussion. But just because > something doesn't appear in RFC2616 doesn't make it not "restful" per > se. Consider LINK or PATCH, for example. Both of which are RESTful, as designed by me in the original HTTP/1.1 proposal. ....Roy
On Thu, Mar 08, 2007 at 04:30:32PM -0800, Roy T. Fielding wrote: > On Mar 8, 2007, at 3:19 PM, Julian Reschke wrote: > > Whether or not all WebDAV methods have been defined *well*, and > > whether > > they are all needed, is a separate discussion. But just because > > something doesn't appear in RFC2616 doesn't make it not "restful" per > > se. Consider LINK or PATCH, for example. > > Both of which are RESTful, as designed by me in the original > HTTP/1.1 proposal. OK, but stating the obvious pragmatic issue: A major benefit of REST is that things work with existing HTTP implementations - servers, proxies, caches, client libraries. But if a REST application uses methods other than those listed in HTTP/1.1, that will greatly reduce the number of existing implementations that will be able to work with that app. -- Paul Winkler http://www.slinkp.com
On Mar 8, 2007, at 5:00 PM, Paul Winkler wrote: > OK, but stating the obvious pragmatic issue: A major benefit of REST > is that things work with existing HTTP implementations - servers, > proxies, caches, client libraries. But if a REST application uses > methods other than those listed in HTTP/1.1, that will greatly reduce > the number of existing implementations that will be able to work with > that app. Any implementation that doesn't support arbitrary HTTP methods is not going to work in other, less obvious, ways as well. It is a lost cause to design systems around the lowest common denominator implementation -- just report errors when they are found and use a configurable workaround when necessary (and only when necessary). The only reason that Web proxies suck is because Web browsers are too stupid to report an error while working around them. ....Roy
Nic James Ferrier schrieb:
> Julian Reschke <julian.reschke@...> writes:
>
>> I've seen arguments that COPY and MOVE could be done "more restful" by
>> manipulating collection membership by updating collection
>> representations, but as far as I can tell, that would still require two
>> separate HTTP requests (one to the source collection, one to the target
>> collection), and I really don't get how this is better than just telling
>> the server to *move* the resource (yes, I'm aware of the problem that
>> MOVE can only address either the source or the target of the operation,
>> but that's an HTTP limitation, not a problem of WebDAV).
>
> But I think we've talked about the problem of MOVE over and over again
> haven't we? it's the same as the problem with PUT and DELETE.
>
> Many proxies don't support anything but POST, GET, HEAD and
> OPTIONS. Support for OPTIONS is extremely sketchy,
>
> So MOVE is right out.

The proposal that started this thread did a PUT+DELETE instead of MOVE. So it was using DELETE already.

I've been doing customer support for a WebDAV server for several years now, and I honestly never had a problem because of a broken proxy (there were cases where methods were disabled on *purpose*, but that's a different story). I believe you that those proxies do exist, but maybe they aren't as widely deployed anymore as you think. And in the worst case, using HTTPS gets you around that.

I do expect that with deployment of the APP we will see more evidence of restricted proxies, servers, firewalls and client libraries. That will be interesting to watch.

> Maybe a move would be as simple as a POST to a resource with a
> specified Location header expecting a 301; eg:
>
> POST /sourceresource HTTP/1.1
> Location: /destresource
> => 301
>
> Isn't that all you'd need?
>
> It would be supported by most clients, most servers and most
> intermediaries.
How would a client ever know without out-of-band information that a POST without request body is a move operation? And since when is Location a request header? Best regards, Julian
On 3/9/07, Julian Reschke <julian.reschke@...> wrote: > [snip] > And since when is Location a request header? Yes, I don't like the idea of using a response header in a request. I suppose you could use Content-Location: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.14 But even so, that leaves unanswered the question of how the client tells the server if the redirect is temporary or permanent. It is this, more than anything else, which gets to the heart of my original question. I can only see a POST|PUT of a representation of the redirect itself (formatted as message/http or possibly application/rdf+xml using the HTTP RDFS) as being a viable solution to this question. It is not simply a question of giving the server a location URI. Alan
Alan Dean schrieb: > On 3/9/07, Julian Reschke <julian.reschke@...> wrote: >> [snip] >> And since when is Location a request header? > > Yes, I don't like the idea of using a response header in a request. > > I suppose you could use Content-Location: > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.14 > > But even so, that leaves unanswered the question of how the client > tells the server if the redirect is temporary or permanent. It is > this, more than anything else, which gets to the heart of my original > question. I can only see a POST|PUT of a representation of the > redirect itself (formatted as message/http or possibly > application/rdf+xml using the HTTP RDFS) as being a viable solution to > this question. It is not simply a question of giving the server a > location URI. Well, or use a new method with well-defined semantics, such as defined in <http://greenbytes.de/tech/webdav/rfc4437.html#METHOD_MKREDIRECTREF>. Best regards, Julian
On 3/9/07, Julian Reschke <julian.reschke@...> wrote:
>
> Well, or use a new method with well-defined semantics, such as defined
> in <http://greenbytes.de/tech/webdav/rfc4437.html#METHOD_MKREDIRECTREF>.
>
> Best regards, Julian

Heh, I am pretty clear by now that you think that MKREDIRECTREF is the answer :-)

I don't personally feel that a new method is necessary. Let me explain why.

The reason why MKREDIRECTREF works is that the method carries an entity body that sets out the definition of the redirect:

-->
MKREDIRECTREF /foo.txt HTTP/1.1
Host: www.example.com
Content-Type: text/xml
Content-Length: ...

<?xml version="1.0" encoding="utf-8" ?>
<d:mkredirectref xmlns:d="DAV:">
<d:reftarget><d:href>/bar.txt</d:href></d:reftarget>
<d:redirect-lifetime><d:permanent/></d:redirect-lifetime>
</d:mkredirectref>

The reason that I don't think that a new method is necessary is that the following is equivalent:

-->
PUT /foo.txt HTTP/1.1
Host: www.example.com
Content-Type: text/xml
Content-Length: ...

<?xml version="1.0" encoding="utf-8" ?>
<d:mkredirectref xmlns:d="DAV:">
<d:reftarget><d:href>/bar.txt</d:href></d:reftarget>
<d:redirect-lifetime><d:permanent/></d:redirect-lifetime>
</d:mkredirectref>

So the question is the entity representation, you see, not the method name.

Or you could do it in RDF:

-->
PUT /foo.txt HTTP/1.1
Host: www.example.com
Content-Type: application/rdf+xml
Content-Length: ...

<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:http="http://www.w3.org/2006/http#">
<http:Response>
...
</http:Response>
</rdf:RDF>

Or you could do it as message/http:

-->
PUT /foo.txt HTTP/1.1
Host: www.example.com
Content-Type: message/http
Content-Length: ...

HTTP/1.1 301 Moved Permanently
Location: /bar.txt

None of the above require a special method.

Regards, Alan Dean
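[Editorial note: a server accepting the message/http PUT above would have to parse the enclosed response message to learn the status code (and hence whether the redirect is permanent or temporary) and the target URI. The sketch below is a minimal, hypothetical parser for that entity, not a full message/http implementation: it handles a single status line plus headers, with no message body.]

```python
def parse_stored_redirect(entity):
    """Extract (status_code, headers) from a message/http entity like
    Alan's example. Assumes one response message, headers only."""
    lines = entity.strip().splitlines()
    # Status line: HTTP-Version SP Status-Code SP Reason-Phrase
    _version, code, _reason = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        if not line:
            break  # blank line would end the header section
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return int(code), headers

code, headers = parse_stored_redirect(
    "HTTP/1.1 301 Moved Permanently\r\nLocation: /bar.txt\r\n"
)
# The permanent/temporary distinction rides on the status code itself
# (301 vs 302/307), which is the appeal of this representation.
```

This illustrates the point of the thread: the stored entity carries both the redirect target and its lifetime, so no new header or method is needed to express either.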
Alan Dean schrieb:
> On 3/9/07, Julian Reschke <julian.reschke@gmx.de> wrote:
> >
> > Well, or use a new method with well-defined semantics, such as defined
> > in <http://greenbytes.de/tech/webdav/rfc4437.html#METHOD_MKREDIRECTREF>.
> >
> > Best regards, Julian
>
> Heh, I am pretty clear by now that you think that MKREDIRECTREF is the
> answer :-)

Thanks for the clarification. Up to now, I wasn't sure whether you actually had considered it at all.

> I don't personally feel that a new method is necessary. Let me explain why.
>
> The reason why the MKREDIRECTREF works is that the method carries an
> entity body that sets out the definition of the redirect:
>
> -->
> MKREDIRECTREF /foo.txt HTTP/1.1
> Host: www.example.com
> Content-Type: text/xml
> Content-Length: ...
>
> <?xml version="1.0" encoding="utf-8" ?>
> <d:mkredirectref xmlns:d="DAV:">
> <d:reftarget><d:href>/bar.txt</d:href></d:reftarget>
> <d:redirect-lifetime><d:permanent/></d:redirect-lifetime>
> </d:mkredirectref>

That's not true. MKREDIRECTREF could have been defined to use request headers instead of an entity body. That's just a detail (and I'm sure many over here will point out that using XML request bodies here is the wrong thing to do...).

> The reason that I don't think that a new method is necessary is that
> the following is equivalent:
>
> -->
> PUT /foo.txt HTTP/1.1
> Host: www.example.com
> Content-Type: text/xml
> Content-Length: ...
>
> <?xml version="1.0" encoding="utf-8" ?>
> <d:mkredirectref xmlns:d="DAV:">
> <d:reftarget><d:href>/bar.txt</d:href></d:reftarget>
> <d:redirect-lifetime><d:permanent/></d:redirect-lifetime>
> </d:mkredirectref>
>
> So the question is the entity representation, you see, not the method name.
That doesn't seem to fit into the definition of PUT (<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.9.6>): "The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server..." > ... Best regards, Julian
On 3/9/07, Julian Reschke <julian.reschke@...> wrote:
>
> That's not true. MKREDIRECTREF could have been defined to use request
> headers instead of an entity body. That's just a detail (and I'm sure
> many over here will point out that using XML request bodies here is the
> wrong thing to do...).

I would be interested to know why using an entity body is wrong.

Also, what request headers would you use to indicate the temporary/permanent nature of the redirect? So far as I can see, the WebDAV spec you linked to does not define this. In fact it stipulates that "The request body MUST be a DAV:mkredirectref XML element."

> That doesn't seem to fit into the definition of PUT
> (<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.9.6>):
>
> "The PUT method requests that the enclosed entity be stored under the
> supplied Request-URI. If the Request-URI refers to an already existing
> resource, the enclosed entity SHOULD be considered as a modified version
> of the one residing on the origin server..."

I don't see a conflict here. After all, a redirect is not the original representation but is a separate entity (and in my example, I only PUT the redirect after DELETEing the original representation). The end effect of the message set is to replace one entity with an entirely different one (in my example, a text/plain representation is replaced with a message/http).

Alan
"Alan Dean" <alan.dean@...> writes:

> On 3/9/07, Julian Reschke <julian.reschke@...> wrote:
>>
>> Well, or use a new method with well-defined semantics, such as defined
>> in <http://greenbytes.de/tech/webdav/rfc4437.html#METHOD_MKREDIRECTREF>.
>>
>> Best regards, Julian
>
> Heh, I am pretty clear by now that you think that MKREDIRECTREF is the
> answer :-)
>
> I don't personally feel that a new method is necessary. Let me explain why.
>
> The reason why the MKREDIRECTREF works is that the method carries an
> entity body that sets out the definition of the redirect:
>
> -->
> MKREDIRECTREF /foo.txt HTTP/1.1
> Host: www.example.com
> Content-Type: text/xml
> Content-Length: ...
>
> <?xml version="1.0" encoding="utf-8" ?>
> <d:mkredirectref xmlns:d="DAV:">
> <d:reftarget><d:href>/bar.txt</d:href></d:reftarget>
> <d:redirect-lifetime><d:permanent/></d:redirect-lifetime>
> </d:mkredirectref>
>
> The reason that I don't think that a new method is necessary is that
> the following is equivalent:
>
> -->
> PUT /foo.txt HTTP/1.1
> Host: www.example.com
> Content-Type: text/xml
> Content-Length: ...
>
> <?xml version="1.0" encoding="utf-8" ?>
> <d:mkredirectref xmlns:d="DAV:">
> <d:reftarget><d:href>/bar.txt</d:href></d:reftarget>
> <d:redirect-lifetime><d:permanent/></d:redirect-lifetime>
> </d:mkredirectref>
>
> So the question is the entity representation, you see, not the method name.
>
> or you could do it in RDF:
>
> -->
> PUT /foo.txt HTTP/1.1
> Host: www.example.com
> Content-Type: application/rdf+xml
> Content-Length: ...
>
> <?xml version="1.0" encoding="utf-8" ?>
> <rdf:RDF xmlns:http="http://www.w3.org/2006/http#">
> <http:Response>
> ...
> </http:Response>
> </rdf:RDF>
>
> or you could do it as message/http:
>
> -->
> PUT /foo.txt HTTP/1.1
> Host: www.example.com
> Content-Type: message/http
> Content-Length: ...
>
> HTTP/1.1 301 Moved Permanently
> Location: /bar.txt
>
> None of the above require a special method.
I don't like the MKREDIRECTREF method because it requires an XML entity body. Yours is better because it doesn't have the requirement of having an XML parser on hand. Sometimes you just don't want to handle XML. That's what I was getting at with the Location thing. Having said that, the Content-Location thing is better. -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk
On 3/9/07, Nic James Ferrier <nferrier@...> wrote: > > I don't like the MKREDIRECTREF method because it requires an XML > entity body. > > Yours is better because it doesn't have the requirement of having an > XML parser on hand. Sometimes you just don't want to handle XML. > > That's what I was getting at with the Location thing. Having said > that, the Content-Location thing is better. But that doesn't answer how the server is told if the redirect is temporary or permanent. If there is an elegant way of doing so without an entity body, I am entirely open to taking that on board, but I haven't seen it yet. Alan
Alan Dean schrieb:
> On 3/9/07, Julian Reschke <julian.reschke@gmx.de> wrote:
> >
> > That's not true. MKREDIRECTREF could have been defined to use request
> > headers instead of an entity body. That's just a detail (and I'm sure
> > many over here will point out that using XML request bodies here is the
> > wrong thing to do...).
>
> I would be interested to know why using an entity body is wrong.

Because intermediates that are interested in the message will have to parse it.

> Also, what request headers would you use to indicate the
> temporary/permanent nature of the redirect? So far as I can see, the
> WebDAV spec you linked to does not define this. In fact it stipulates
> that "The request body MUST be a DAV:mkredirectref XML element."

Yes, but it could have defined new request headers *instead* of a request body. It didn't, therefore those headers do not exist.

> > That doesn't seem to fit into the definition of PUT
> > (<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.9.6>):
> >
> > "The PUT method requests that the enclosed entity be stored under the
> > supplied Request-URI. If the Request-URI refers to an already existing
> > resource, the enclosed entity SHOULD be considered as a modified version
> > of the one residing on the origin server..."
>
> I don't see a conflict here. After all, a redirect is not the original
> representation but is a separate entity (and in my example, I only PUT
> the redirect after DELETEing the original representation). The end
> effect of the message set is to replace one entity with an entirely
> different one (in my example, a text/plain representation is replaced
> with a message/http).

So are you storing the entity? And if you do, why don't you return it upon GET?

Best regards, Julian
On 3/9/07, Julian Reschke <julian.reschke@...> wrote: > Alan Dean schrieb: > > > > > > On 3/9/07, Julian Reschke <julian.reschke@gmx.de> wrote: > > > > > > That's not true. MKREDIRECTREF could have been defined to use request > > > headers instead of an entity body. That's just a detail (and I'm sure > > > many over here will point out that using XML request bodies here is the > > > wrong thing to do...). > > > > I would be interested to know why using an entity body is wrong. > > Because intermediates that are interested in the message will have to > parse it. Which intermediaries? If a proxy, then the cached copy will be invalidated by the PUT. > > > Also, what request headers would you use to indicate the > > temporary/permanent nature of the redirect? So far as I can see, the > > WebDAV spec you linked to does not define this. In fact it stipulates > > that "The request body MUST be a DAV:mkredirectref XML element." > > Yes, but it could have defined new request headers *instead* of a > request body. It didn't, therefore those headers do not exist. It didn't define headers - that was the point I was making in response to your comment that "MKREDIRECTREF could have been defined to use request headers instead of an entity body." > > > > That doesn't seem to fit into the definition of PUT > > > (<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.9.6>): > > > > > > "The PUT method requests that the enclosed entity be stored under the > > > supplied Request-URI. If the Request-URI refers to an already existing > > > resource, the enclosed entity SHOULD be considered as a modified version > > > of the one residing on the origin server..." > > > > I don't see a conflict here. After all, a redirect is not the original > > representation but is a separate entity (and in my example, I only PUT > > the redirect after DELETEing the original representation).
The end > > effect of the message set is to replace one entity with an entirely > > different one (in my example, a text/plain representation is replaced > > with a message/http). > > So are you storing the entity? And if you do, why don't you return it > upon GET? My premise is that the redirect is itself an entity. Yes, you can store it. And yes, you return it upon GET by responding with the appropriate 3xx status. Alan
Alan Dean schrieb: > On 3/9/07, Julian Reschke <julian.reschke@...> wrote: >> Alan Dean schrieb: >> > >> > >> > On 3/9/07, Julian Reschke <julian.reschke@gmx.de> wrote: >> > > >> > > That's not true. MKREDIRECTREF could have been defined to use >> request >> > > headers instead of an entity body. That's just a detail (and I'm >> sure >> > > many over here will point out that using XML request bodies here >> is the >> > > wrong thing to do...). >> > >> > I would be interested to know why using an entity body is wrong. >> >> Because intermediates that are interested in the message will have to >> parse it. > > Which intermediaries? If a proxy, then the cached copy will be > invalidated by the PUT. An intermediate that is interested in the redirect generation itself. It would need to parse the XML to find out about the link target. >> So are you storing the entity? And if you do, why don't you return it >> upon GET? > > My premise is that the redirect is itself an entity. Yes, you can > store it. And yes, you return it upon GET by responding with the > appropriate 3xx status. Well, returning a 3xx means that the entity the client asked for is somewhere else. So if you have /a redirecting to /b, the GET response body for /a is *not* the requested entity. Best regards, Julian
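Alan's "redirect as entity" premise is easy to make concrete. Below is a minimal sketch (my own illustration with hypothetical names, not part of the thread's examples) of a server-side dispatch table where PUTting a redirect entity at a URI makes subsequent GETs on that URI answer with a 3xx status and a Location header rather than an entity body, which is exactly Julian's caveat: the GET response for /a is then not the requested entity itself.

```python
# Sketch: a "redirect" stored as an entity, served back on GET as a 3xx.
# All names here are hypothetical; this is an in-memory model, not HTTP.

redirects = {}   # uri -> (status, target), e.g. "/a" -> (301, "/b")
documents = {}   # uri -> body

def get(uri):
    """Dispatch a GET: a stored redirect wins over a stored document."""
    if uri in redirects:
        status, target = redirects[uri]
        return status, {"Location": target}, b""   # no entity body
    if uri in documents:
        return 200, {}, documents[uri]
    return 404, {}, b""

def put_redirect(uri, target, permanent=True):
    """PUT a redirect 'entity': later GETs answer 301 or 307."""
    redirects[uri] = (301 if permanent else 307, target)

documents["/b"] = b"moved content"
put_redirect("/a", "/b", permanent=True)
```

The permanent/temporary decision lives in the stored entity, which is the point of contention above: over the wire it still needs either a body or a header to carry it.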
* Bill de hOra <bill@...> [2007-03-04 18:00]: > I think "APP" could be confusing, but it's handier than typing > in "Atom Protocol". The term’s main failing is googlability. Expanding only the first letter yields “AtomPP” which is specific, seems to have been invented independently by multiple different people and is already in some use on the web. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
+1 for AtomPP for googlability. (Is that word in the dictionary?) Henry On 9 Mar 2007, at 15:01, A. Pagaltzis wrote: > * Bill de hOra <bill@dehora.net> [2007-03-04 18:00]: > > I think "APP" could be confusing, but it's handier than typing > > in "Atom Protocol". > > The term's main failing is googlability. Expanding only the first > letter yields "AtomPP" which is specific, seems to have been > invented independently by multiple different people and is > already in some use on the web. > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/> > >
I just searched for Atom APP and it works for me... but I'm fine with typing Atom APP AtomPP to catch everything :) -John Henry Story wrote: > > +1 for AtomPP for googlability. > > (Is that word in the dictionary?) > > Henry > > On 9 Mar 2007, at 15:01, A. Pagaltzis wrote: > >> * Bill de hOra <bill@...> [2007-03-04 18:00]: >> > I think "APP" could be confusing, but it's handier than typing >> > in "Atom Protocol". >> >> The term's main failing is googlability. Expanding only the first >> letter yields "AtomPP" which is specific, seems to have been >> invented independently by multiple different people and is >> already in some use on the web. >> >> Regards, >> -- >> Aristotle Pagaltzis // <http://plasmasturm.org/> >> >> > -- Abstractioneer <http://feeds.feedburner.com/aol/SzHO> John Panzer System Architect http://abstractioneer.org
I have to say that I agree with Elliotte Rusty Harold's blog post that PUT is not the same thing as UPDATE (http://cafe.elharo.com/web/put-is-not-update/). The SQL analogy that is so prevalent in REST 101 tutorials hurt me more than they helped me as I was coming up to speed. It's just confusing. I think a better alias for PUT is "SET". Its connotations fit the method much better and pair it up nicely with GET -- programmers usually think of accessor methods when they hear "get" and "set". It's not a perfect analogy as an accessor method usually isn't setting or getting the full object. But if you instead think of your resource space as the "object" then accessors getX() or setX() on member X map to the semantics of GET and PUT on resource X. If you also add that X might start out as NULL or undefined until you call setX() then you've pretty much covered it. However, if you pair GET and PUT in this way, it leaves you with POST and DELETE. With the CRUD analogy, you have CREATE and DELETE -- another great pair. But POST is so much more than CREATE. In both the HTTP RFC and common (restful) practice, POST is generalized to a transformation of the target resource and its subordinate resource space. However, this transformation is usually some sort of augmentation of the resource and/or its subordinates. For example, you are creating a new resource, or appending information to the target resource, etc. The examples in the definition of POST all have this flavor. Even when a POST just submits something for processing, you've added something to an invisible queue of work. Unfortunately, the lowly DELETE method is not such a great match for the almighty POST. It deletes the target resource... and that's about it. It makes you wonder: should DELETE be given a shot of steroids? Or do we need a new method to represent a general reduction of the resource and/or subordinates?
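The getX()/setX() analogy above can be sketched in a few lines (a hypothetical in-memory model of my own, not an HTTP implementation): the resource space plays the role of the object, URIs play the role of member names, and GET/PUT/DELETE behave like idempotent accessors, with an unset member reading as undefined (a 404).

```python
class ResourceSpace:
    """The 'object' whose members are resources, keyed by URI."""

    def __init__(self):
        self._resources = {}

    def get(self, uri):
        # GET ~ getX(): safe, idempotent read; a 404 is modeled as None
        return self._resources.get(uri)

    def put(self, uri, representation):
        # PUT ~ setX(): idempotent full replacement; creates if absent
        self._resources[uri] = representation

    def delete(self, uri):
        # DELETE: idempotent removal; deleting twice is harmless
        self._resources.pop(uri, None)

space = ResourceSpace()
space.put("/x", "hello")   # setX("hello")
space.put("/x", "hello")   # repeating the PUT changes nothing
```

Nothing here models POST, which is the asymmetry the rest of the post is about: there is no obvious accessor-style pairing left for it.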
This issue sort of jumps out at you when you consider implementing a REST interface to some sort of queue. You immediately arrive at POSTing to the queue resource to enqueue data (e.g. POST to http://example.com/queue). But dequeuing data, on the other hand, is not so obvious. One solution is proposed here: http://www.xml.com/pub/a/2005/01/05/restful.html But as pointed out in the comments, there seems to be a race-condition issue with multiple consumers. The reply to the comment proposes a workaround, but unless I'm mistaken uses an unsafe version of GET as it essentially dequeues data in the read. Regardless of the specific issues though -- dequeuing is certainly not as straightforward as enqueuing. Perhaps DELETE can be extended to make this easier. Allowing DELETE to return the contents of the deleted resource helps. This isn't strictly forbidden by HTTP as the DELETE response can have a body. But ideally you also want to be able to target the operation on a resource that represents the head of the queue rather than the resource to be deleted (e.g. http://example.com/queue/head). This saves consumers from having to co-ordinate ownership of dequeued resources before they DELETE them. So you'd want http://example.com/queue/head to be an alias for the resource currently at the front of the queue; when you DELETE the alias, you actually delete the resource at the front of the queue and the alias stays intact. But that sort of behavior is not allowed by DELETE right now -- it's not idempotent. Another possibility is to add a new method, say PULL, that performs some reduction transformation on the resource and its subordinates and returns the removed data. A 2xx "Deleted" response code could be added to indicate when the method resulted in a resource being deleted. The Location header could name the victim. With this method, you add to the queue with a POST to http://example.com/queue and remove from the queue with a PULL from the same resource.
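One way to patch the race Andrew mentions without changing DELETE's contract is the optimistic approach: GET the head (noting its ETag), then DELETE with If-Match. A losing consumer gets 412 Precondition Failed and retries. This is only a sketch of the idea (the in-memory queue and function names are my own illustration, not from the cited article):

```python
import hashlib

queue = ["job-1", "job-2"]   # the resources behind /queue/head

def get_head():
    """GET /queue/head: returns (status, etag, body)."""
    if not queue:
        return 404, None, None
    body = queue[0]
    # a content hash stands in for a real per-entity ETag here;
    # a real server would use a distinct tag per queue entry
    etag = hashlib.sha1(body.encode()).hexdigest()
    return 200, etag, body

def delete_head(if_match):
    """DELETE /queue/head with If-Match: succeeds only if the head is
    still the entity this consumer saw. Returns (status, body)."""
    status, etag, body = get_head()
    if status == 404:
        return 404, None
    if if_match != etag:
        return 412, None   # Precondition Failed: another consumer won
    queue.pop(0)
    return 200, body       # body of the dequeued entity, per the post

# Two consumers read the same head, then race to DELETE it:
_, etag_a, _ = get_head()
_, etag_b, _ = get_head()
status_a, job_a = delete_head(etag_a)   # dequeues job-1
status_b, job_b = delete_head(etag_b)   # head changed: precondition fails
```

The DELETE on the alias is still not idempotent in the strict sense, but the If-Match precondition makes a blind repeat harmless: it fails with 412 instead of silently dequeuing someone else's entry.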
Finally, one could just say that POST is a transformation that is typically an augmentation -- but not always. That lets you POST to http://example.com/queue/tail to enqueue and POST to http://example.com/queue/head to dequeue. Does anyone else see this as a problem? This lack of symmetry does seem to be related to a lot of the headaches I come across when trying to do things RESTfully. Are there alternative solutions that I'm missing? Andrew Wahbe
Hi All, This is my first message to this group. I found this group to be very interactive and I hope we can share our knowledge. I don't know how to start with REST; reading your messages and seeing how you use REST, I find it a little bit tricky. It would be very helpful if anyone in the group could send me a sample to run -- I'm sure there is at least one person out there who can. Thanks a lot, and welcome me by sending a sample. Thanks, Vikranth
"vikranth" <kvikranth@...> writes: > in the group to send me a sample......... You provide me with a bottle and I'll give you a sample. -- Nic Ferrier ---------------------------------------------------------- Need a linux/java/python/web hacker? I'm in need of work! ---------------------------------------------------------- http://www.tapsellferrier.co.uk
Hi Vikranth, If you are familiar with the Java programming language, you can check out this tutorial: http://www.restlet.org/documentation/1.0/tutorial Best regards, Jerome
It seems like only last month that Sun were putting together a JSR to come up with some RESTy APIs for java, and yet already there is an Early Access release of a set of REST APIs: http://developers.sun.com/web/swdp/docs/OnePagerREST_APIs.html Either JSR-311 has been incredibly busy, or the JCP has more in common with ECMA's standardisation of MS OOXML than that of normal work groups. -Not looked at the API itself; someone should have a play... -steve
--- "Steve Loughran" <steve.loughran.soapbuilders@...> wrote: > > It seems like only last month that Sun were putting together a JSR to > come up with some RESTy APIs for java, and yet already there is an > Early Access release of a set of REST APIs: > http://developers.sun.com/web/swdp/docs/OnePagerREST_APIs.html > It really should come as no surprise that we've been experimenting with APIs prior to starting a JSR. The early access API we just released is a snapshot of our thinking from back in December, we thought it would be good to get something out there for people to play with and hopefully elicit some feedback. > Either JSR-311 has been incredibly busy, or the JCP has more in common > with ECMA's standardisation of MS OOXML than that of normal work > groups. > The JSR hasn't kicked off yet, we hope to get things started early next month once all the administrative details are taken care of. I don't expect the output of the JSR to be a rubberstamp of the API we just released but building it provided useful experience for us and has helped shape our thinking on the JSR. > -Not looked at the API itself; someone should have a play... > Please do, all feedback welcome. Marc.
Is there an actual implementation somewhere? That link just seems to lead to one page with no links. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
http://developers.sun.com/web/swdp/ -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Elliotte Harold Sent: Tuesday, March 13, 2007 11:52 AM To: marc_hadley Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: RESTful Web Services API (Early Access Is there an actual implementation somewhere? That link just seems to lead to one page with no links. -- Elliotte Rusty Harold elharo@metalab.unc.edu Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Harold wrote:
> Is there an actual implementation somewhere? That link just seems to
> lead to one page with no links.
>
See here [1] for links to more documentation (including JavaDoc).
Paul.
[1] http://blogs.sun.com/sandoz/entry/documentation_for_restful_java_api
--
| ? + ? = To question
----------------\
Paul Sandoz
x38109
+33-4-76188109
According to Paul; http://blog.whatfettle.com/2007/03/15/qcon-soa-v-rest-slides/ BTW, I'm heading out of range of an HTTP message for a week; if somebody would like to volunteer to take over moderation duties on the list (everybody's first post needs to be manually approved, as a spam fighting measure), please let me know. Otherwise, enjoy a week of no newbies! 8-) Mark.
I've got a vict^H^H^Holunteer, thanks! On 3/15/07, Mark Baker <distobj@...> wrote: > BTW, I'm heading out of range of an HTTP message for a week; if > somebody would like to volunteer to take over moderation duties on the > list (everybody's first post needs to be manually approved, as a spam > fighting measure), please let me know. Otherwise, enjoy a week of no > newbies! 8-)
On Thu, 2007-03-08 at 19:34 +0000, Alan Dean wrote: > I've got another nitty-gritty issue to get feedback from the group on. > I have been thinking about RESTful move operations. Obviously, a > RESTful system should not require a MOVE verb, and the > operation can be carried out as follows (assume that the > representation already exists): Answer 1: It is a bad idea to "MOVE" resources. You aren't ever really moving them. You are creating another resource with the same state, and making the original resource a redirect to the new one. While that might result in an owl:sameAs equivalence between the resources... you generally should be asking yourself why you need to do this. A resource's url should normally contain as little information as possible: Only enough to identify it. When you change its url you are changing its identification, and changing the resource. While this might make sense in a document publishing world where you just want to edit your blog, it is usually a bad idea in the machine-to-machine REST practice. Answer 2: As per your instructions: --> > GET /foo.txt > > <-- > HTTP/1.1 200 OK > ETag: "hash" > Content-Type: text/plain > Content-Length: 16 > > this is the body > > --> > PUT /bar.txt > If-None-Match: * > Content-Type: text/plain > Content-Length: 16 > > this is the body > > <-- > HTTP/1.1 201 Created > > --> > DELETE /foo.txt > If-Match: "hash" > > <-- > HTTP/1.1 204 No Content Then: --> PUT /.htaccess HTTP/1.1 (headers) redirectMatch permanent ^/foo.txt$ /bar.txt <-- HTTP/1.1 201 Created This is the right-way-to-do-it(tm) in REST style. Don't introduce new methods to deal with "properties". Instead, interact with a resource (in this case the .htaccess file) that defines those properties. Use a content-type that is understood by this resource to define the properties. 
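Benjamin's conditional sequence condenses to three guarded steps. The toy in-memory store below stands in for the server (the helper names are mine; this is a sketch of the preconditions, not a real HTTP client): If-None-Match: * guards the PUT against clobbering an existing destination, and If-Match guards the DELETE against racing a concurrent update of the source.

```python
import zlib

store = {"/foo.txt": "this is the body"}

def etag_of(body):
    # stand-in for the server's ETag computation
    return '"%08x"' % (zlib.crc32(body.encode()) & 0xFFFFFFFF)

def move(src, dst):
    """GET src; PUT dst (If-None-Match: *); DELETE src (If-Match: etag).
    Returns the final HTTP-ish status code."""
    # 1. GET the source, noting its ETag
    if src not in store:
        return 404
    body = store[src]
    etag = etag_of(body)
    # 2. Conditional PUT: refuse if dst already exists (If-None-Match: *)
    if dst in store:
        return 412
    store[dst] = body
    # 3. Conditional DELETE: refuse if src changed since step 1 (If-Match)
    if etag_of(store[src]) != etag:
        return 412   # caller must clean up dst and reconcile
    del store[src]
    return 204
```

Note the sequence is still not atomic: between steps, another client can observe both resources existing, which is part of why the thread concludes a server-side MOVE buys little and couples much.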
Technically you should probably be registering the format of that htaccess file with the IANA and making its content an RFC, but that is the only place this approach really falls short. As for MOVE... well, that has been covered. Don't use it. You have no right to assume that the server you send the MOVE to will be able to operate on the two identified resources simultaneously. It could be a mashup of several different services. MOVE introduces unnecessary coupling, let alone its other failures already noted in this and other threads. Its benefits do not outweigh its problems. Benjamin.
On 3/17/07, Benjamin Carlyle <benjamincarlyle@...> wrote: [snip] > > Then: > > --> > PUT /.htaccess HTTP/1.1 > (headers) > > redirectMatch permanent ^/foo.txt$ /bar.txt > > <-- > HTTP/1.1 201 Created > > This is the right-way-to-do-it(tm) in REST style. Don't introduce new > methods to deal with "properties". Instead, interact with a resource (in > this case the .htaccess file) that defines those properties. Use a > content-type that is understood by this resource to define the > properties. Technically you should probably be registering the format of > that htaccess file with the iana and making its content an rfc, but that > is the only place this approach really falls short. Can't say that I like the idea of making an internal structure public, such as an htaccess file (which is specific to Apache anyway afaik, and I'm not using Apache). I have the feeling that there are numerous security issues with that. Plus, as you say, there is no registered mime type. This is why I was investigating the possibility of using the message/http mime type. > As for MOVE... well that has been covered. Don't use it. You have no > right to assume that the server you send the MOVE to will be able to > operate on the two identified resources simultaneously. It could be a > mashup of several different services. MOVE introduces unnecessary > coupling, let alone its other failures already noted in this and other > threads. Its benefits do not outweigh its problems. I wasn't intending to use MOVE, simply trying to identify how to restfully carry out an operation that "looks like a move", followed by establishing a temporary or permanent redirect as a user-agent decision. Regards, Alan
On Thu, 2007-03-01 at 19:56 +0100, Danny Ayers wrote: > On 01/03/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > > On Sun, 2007-02-25 at 21:53 +0100, Danny Ayers wrote: > > > The Web is a graph structure. > > That's fine in the abstract sense, > More than that, my user agent (even if it's just a browser) can wander > around that graph. > but > > * An atom document has an atom structure > > * A html document has a html structure > Both of these will describe part of a graph-shaped model, because of > the links they contain. That's true, however that does not have a necessary impact on content type design. While a graph may be a suitable structure for conveying statements about resources, I posit that it is not the most suitable structure for conveying information about resources. I think RDF is inside-out when it comes to uniform messaging of the kind that REST demands. It concentrates on a generic form of representing statements without incorporating the requirements of a particular vocabulary. I think that the vocabulary or kind of information has more of an impact on what makes a good or a bad representation than RDF gives it credit for. I think Stu is right that this is the old object/relational battle all over again and that it depends on your perspective as to which is important. I guess I am on the object side of the fence. > > * A train list document has a train list structure > Ok, a list can be expressed directly in a tree or a graph. A graph is rarely the data structure of choice when working with data that could be stored as a list. Graphs carry with them intrinsic algorithmic complexity that lists side-step. Choosing a more specific type reduces the cost of developing an application. Choosing a more general type to convey the same information increases cost. It forces information consumers to accommodate possible variation in structure that does not exist in practice and employ more complex algorithms than necessary.
There is a danger in getting too specific that we introduce coupling between the source of the data and the data sink. It could be that we get too specific and fail to be understood when we might have been understood if we had used more generic terminology. I am sure this issue applies for vocabulary, but I am not sure it applies to document structure that might flow from that vocabulary. When two components exchange messages the content must be of a known type that the components agree on. The source component encodes its internal data structures into the form of the representation. The sink populates its internal data structures from the representation. Depending on the importance of a particular content type to each, the internal data structures will have a greater or lesser alignment to the structure of the representation. Components that speak a lot of atom are likely to use structures that name atom elements. Components that speak a lot of html are likely to align to the html specification. Sometimes components will store their information in generic infosets such as a DOM or an RDF graph. Where components do store information and manipulate it in the form of an RDF graph, RDF is a win. Where components store information and manipulate it as an XML DOM or as any other structure, RDF is a cost and a burden. I still could be wrong on this issue, but I suspect that the number of components that actually do use RDF internally is quite a small subset of applications. I don't see this figure changing much either, with the inherent complexity of dealing with graphs that RDF introduces. Moreover, the subset of applications that do work with RDF internally can still understand data that comes into them in non-RDF standard content type representations. An RDF app can still consume an atom document. It just costs more to do so than consuming an atom+rdf+something document.
It really depends on the balance of applications using RDF internally and those that don't as to which approach will carry the most value in the long term: Standard document types built around the requirements of information in those document types, or standard RDF vocabularies? I think the balance falls one way. Reasonable people may see it falling the other way. In terms of REST, I see no more value in using application/rdf+xml than in using application/xml. That is to say, in terms of messages on the network I think that you need to be specific about the vocabulary you are using and about any related document structure. I still have some thoughts to work through in terms of envelope types, but generally I think that the content type needs to be pretty close to a description of the whole message. Ultimately, knowing that a content type is shared between resource and client should be enough to say that one will understand the output of the other. It should be enough to say that I'll be able to plug them together. RDF or XML as the document type are only sufficient when the client doesn't need to understand the information returned from a GET request. In general, someone needs to understand eventually. If you retrieved information in terms of the wrong vocabulary there might be hell to pay when processing occurs on the data. > > These are the structures I really want to get at when I process > > information from another component in the network. > Are all your local data models trees? I would suggest that the majority of data that needs to be transferred between machines can be represented as simple structures (ie, class with member variables) with a few vectors thrown in. Graph interconnects can be described in the same way RDF describes them: With URIs. They can even be described in more context-sensitive ways if that is appropriate. However, most of the time I think that the graph part of the model is subservient to the simpler parts of the model.
Semi-structured data like HTML's text with tags are probably one of the more important exceptions, but RDF doesn't handle these any differently to the way an XML document would. Both would just include a bit of special XML to deal with it. > Note also that there are several non-XML RDF syntaxes, and that many > non-RDF syntaxes can be interpreted directly as RDF (e.g. Raptor has > an Atom parser). I understand that. It is the graph model in memory that I have more of an issue with, as compared to the tree model of in-memory XML. If I don't want to work with either model, I have to do more work processing the graph than I do processing the tree. If I want to use the tree model of XML then RDF is a step backwards. If I am happy to use the graph as my data model then RDF is a win, but I don't think that is the common case for distributed software that could form the basis of a semantic web. I see the atom experience being repeated as the way we will achieve semantic nirvana: One small step at a time, based on as simple an XML structure as the underlying data model for a particular problem-space will allow. > > If it is a prerequisite of the machine-processable web to have fully > > self-describing documents, then we can always translate these to RDF > for > > our storage needs if we really want to. In the mean-time, I would > > suggest that RDF complicates the common case in favour of an > uncommon > > case that can be solved in a different way once the common case is > dealt > > with. > I would see that the other way around, that RDF doesn't complicate the > common case because there's no conflict with passing around XML. But > when you need to integrate data across domains, RDF is mighty handy. I still haven't seen this kind of data integration working, especially within REST constraints of a message with a content type that describes its content.
I look at how rss was not aggregated at the RDF level, but at a higher level and take from that that RDF doesn't provide automatic aggregation within a vocabulary. I look at the problem of describing the content type of a mixed rdf document and take from that that RDF doesn't have a good answer for exchanging multi-vocabulary documents in a RESTful way. What I do see in RDF is a foundation for the next generation of RDBMS. I do see in RDF a possible foundation for reasoning-based languages. I think both functions are useful within a particular service boundary or behind a particular firewall. I don't think these functions are important to integrate into the main message-exchanging uniform Web. I think the cost of that integration outweighs the benefits both now and for the foreseeable future. If the kind of reasoning-based languages that RDF supports become more important this balance may shift, and it may be valuable to put data into the Web in RDF form. I guess this is what current RDF proponents are counting on. I am dubious about these languages supplanting the world's current crop, but maybe I just haven't caught the bug yet. I think that RDF still has some hurdles to jump, as well, in terms of working with REST. Thanks for the discussion so far, gents. I guess where I am at at the moment is that RDF is an expensive unnecessary complication for today's software languages and techniques. If we replaced all of those techniques with RDF-centric reasoning-based languages RDF might tip the balance towards being useful. I'm a conservative kind of guy who sees a system that works today and can work well with RDF-based systems, and I guess I don't see the impetus for change. However I can also see that developers pushing for an RDF tipping point might eventually gather enough RDF data to make the reasoning-based approach compelling. I suppose I'll have to think on it for another few years.
I am wary of RDF's lack of an internal architectural style to avoid vocabulary proliferation. I think it will need to develop ways of developing communities around particular vocabularies, including the extension of those vocabularies. I don't think RDF successfully leverages REST's style in dealing with this aspect of its evolution, and I have a gut feel that the way RDF interacts with namespaces and universality will all end in tears. For now I will continue to develop without RDF. Perhaps after I see a few examples of dominant RDF vocabularies evolving and seeing real popular use on the web over the course of a decade or so I'll have cause to change my mind. Benjamin.
On 17 Mar 2007, at 14:34, Benjamin Carlyle wrote: > I think Stu is right that this is the old object/relational battle > all over again and that it depends on your perspective as to which is > important. I guess I am on the object side of the fence. ? OO languages are just dealing with graphs of objects. There is no side of the fence to sit on here. They are pretty isomorphic. Henry
--- Benjamin Carlyle <benjamincarlyle@...> wrote: > Moreover, the subset of applications that do work with RDF internally > can still understand data that comes into them in non-RDF standard > content type representations. An RDF app can still consume an atom > document. It just costs more to do so than consuming an atom+rdf > +something document. It really depends on the balance of applications > using RDF internally and those that don't as to which approach will > carry the most value in the long term: Standard document types built > around the requirements of information in those document types, or > standard RDF vocabularies? I think the balance falls one way. Sure. It's unclear that RDF/XML should be the media type of choice for all representation exchange. As I think you're suggesting, the major problem is that it abstracts the processing model of the representation (i.e. do I really need to reify a graph to figure out which URLs are dereferencable & by what method? Can't I just have <myelement uri="place/stuff/here" method="PUT"/>?) Having said that, it probably should be the media type of choice for interoperable data analysis and/or inference, if only because there isn't much else out there (and would it be better?) Plus, [my favorite media type] + GRDDL seems to be the latest direction of the Semantic Web, instead of "RDF/XML uber alles". > I see the atom experience being repeated as the way we will achieve > semantic nirvana: One small step at a time, based on as simple an XML > structure as the underlying data model for a particular problem-space > will allow. This I'm not sure of. The problem is that the experiences with REST hypermedia processing models like HTML, Atom, RSS, et al, are about specifying the semantics of "containers". Which, while difficult at internet scale, is something we're fairly experienced at, because it's generally independent of one's value system or doesn't really draw out divisive economic interests.
When we get into cross-domain interoperability -- when data elements have actual *value* to mean a different thing based on one's perspective / lifestyle / economic system / position / etc, we probably need to raise the level of agreement from one of syntax to first or second order logic. That's where I think there's reason to believe that we can achieve this "one small step at a time" also with GRDDL, RDF, OWL, etc > What I do see in RDF is a foundation for the next generation of > RDBMS. I > do see in RDF a possible foundation for reasoning-based languages. I > think both functions are useful within a particular service boundary > or > behind a particular firewall. I don't think these functions are > important to integrate into the main message-exchanging uniform Web. As I said above, I think they'll be crucial if we want the web to tackle cross-domain interoperability. Remember, the RDBMS did succeed at cross-domain interop, if you aligned the politics & got the funding -- the infamous data warehouse still fuels most of our business intelligence. The real key with the semantic web will be to take those lessons and apply them to "external information", which is typically more valuable than "internal information" anyway. Cheers Stu
Benjamin Carlyle wrote: > I see the atom experience being repeated as the way we will achieve > semantic nirvana: One small step at a time, based on as simple an XML > structure as the underyling data model for a particular problem-space > will allow. That may be, but, barring a major breakthrough in mathematical logic, the state of the art suggests there are only a few places such data models end up. RDF is already at one - Tarskian semantics. > >>> If it is a prerequisite of the machine-processable web to have fully >>> self-describing documents, then we can always translate these to RDF >> for >>> our storage needs if we really want to. In the mean-time, I would >>> suggest that RDF complicates the common case in favour of an >> uncommon >>> case that can be solved in a different way once the common case is >> dealt >>> with. >> I would see that the other way around, that RDF doesn't complicate the >> common case because there's no conflict with passing around XML. But >> when you need to integrate data across domains, RDF is mighty handy. > > I still haven't seen this kind of data integration working, especially > within REST constraints of a message with a content type that describes > its content. I look at how rss was not aggregated at the RDF level, but > at a higher level and take from that that RDF doesn't provide automatic > aggregation within a vocabulary. What "higher level" would that be? > I look at the problem of describing the > content type of a mixed rdf document and take from that that RDF doesn't > have a good answer for exchanging multi-vocabulary documents in a > RESTful way. Neither does XML. No browsers support XML namespaces properly; what they will or will not do with microformats is unclear. This problem is not restricted to RDF. But the consequences of applying inappropriate or unwarranted reasoners to RDF-based data are probably serious, as they'll be very hard to debug or unravel. 
Then again, when we are writing the Bitter REST books 7 years from now, I imagine conneg and self-description will get a chapter each. > Thanks for the discussion so far, gents. I guess where I am at at the > moment is that RDF is an expensive unnecessary complication for today's > software languages and techniques. If we replaced all of those > techniques with RDF-centric reasoning-based languages RDF might tip the > balance towards being useful. I agree with this overall position, but with very, very few of the arguments you used to justify it, at least the ones I can understand (your graph/tree position is lost on me; there's no free lunch for data structures and I can't reconcile your stated preference for objects with your disdain for graph processing). The counter-position worth considering is that today's software languages and techniques are quite poor given the problem spaces. The dollar cost of integration and maintenance is the bulk of current "IT" spending. cheers Bill
On 19 Mar 2007 18:15:12 -0700, Bill de hOra <bill@...> wrote: > > Neither does XML. No browsers support XML namespaces properly; This comment is incorrect. There are some browser features that don't support XML namespaces properly. Firefox's Live Bookmarks feature would be one (the feed preview screen has no known namespace bugs). -- Robert Sayre
Hi all, Last week, Sean Landis made a very nice presentation about REST and the Restlet project at the Utah Java Users Group. The slides as well as the sample code are publicly available on our Wiki: http://wiki.restlet.org/#Presentations Direct link to PowerPoint slides: http://restlet.tigris.org/files/documents/3375/36973/Restlet.pps Best regards, Jerome Louvel -- http://www.restlet.org
Minor note PUT = Create a resource Using 'create' can be confusing - perhaps 'set the state' of a resource is better. The reason 'create' can be confusing is that folks may assume a PUT to a pre-existing resource would be an error, when it isn't an error to set the state of a pre-existing resource. > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jérôme Louvel > Sent: Tuesday, March 20, 2007 4:19 AM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] Restlet presentation slides > > Hi all, > > Last week, Sean Landis made a very nice presentation about > REST and the Restlet project at the Utah Java Users Group. > The slides as well as the sample code are publicly available > on our Wiki: > http://wiki.restlet.org/#Presentations > > Direct link to PowerPoint slides: > http://restlet.tigris.org/files/documents/3375/36973/Restlet.pps > > Best regards, > Jerome Louvel > -- > http://www.restlet.org
> Minor note > PUT = Create a resource > > Using 'create' can be confusing - perhaps 'set the state' of a resource is > better. > The reason 'create' can be confusing is that folks may assume a PUT to a > pre-existing resource would be an error, when it isn't an error to set the > state of a pre-existing resource. I'd confuse that exactly the opposite way! It must exist to have a state to change? I found the slides refreshingly clear. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
On 20 Mar 2007 19:29:52 -0700, Mike Dierken <dierken@...> wrote: > > Minor note > PUT = Create a resource > > Using 'create' can be confusing - perhaps 'set the state' of a resource is > better. > The reason 'create' can be confusing is that folks may assume a PUT to a > pre-existing resource would be an error, when it isn't an error to set the > state of a pre-existing resource. I think of PUT as "set whole resource" (which is obviously valid whether the resource pre-exists or not) and POST as either "append to resource" or "append new child (aka subordinate resource) to resource" where the append may or may not be permitted, depending upon the server. So, yes, simply saying "create" could be confusing if the audience doesn't already hold a mental model of what RESTfulness is. Alan Dean http://thoughtpad.net/who/alan-dean/
Slide 13 has the following:

GET – retrieve a resource
PUT – create a resource
POST – update (create if necessary) a resource
DELETE – delete a resource

I don't particularly like the CRUD analogy anyway, but if it's used at all, I believe PUT and POST should be swapped. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On Mar 21, 2007, at 9:04 AM, Dave Pawson wrote: > > Minor note > > PUT = Create a resource > > > > Using 'create' can be confusing - perhaps 'set the state' of a > resource is > > better. > > The reason 'create' can be confusing is that folks may assume a > PUT to a > > pre-existing resource would be an error, when it isn't an error > to set the > > state of a pre-existing resource. > > I'd confuse that exactly the opposite way! It must exist to have a > state to change? > > I found the slides refreshingly clear. > > regards > -- > Dave Pawson > XSLT XSL-FO FAQ. > http://www.dpawson.co.uk > >
On 3/21/07, Stefan Tilkov <stefan.tilkov@innoq.com> wrote: > Slide 13 has the following: > > GET – retrieve a resource > PUT – create a resource > POST – update (create if necessary) a resource > DELETE – delete a resource > > I don't particularly like the CRUD analogy anyway, but if it's used > at all, I believe PUT and POST should be swapped. Interestingly, I have been having a discussion with Steve Maine on a very similar subject after his blog entry "Musings on PUT and POST" here http://hyperthink.net/blog/2007/03/15/Musings+On+PUT+And+POST.aspx He regards himself as a "POST purist" and feels that POST should be used for creation, not PUT. APP takes the same stance too, see http://bitworking.org/projects/atom/draft-ietf-atompub-protocol-14.html#post-to-create I have to say that I don't see it that way. Here is an excerpt of my conversation with Steve: [quote] I am going to agree with one aspect of what you are saying, Steve - namely that the SQL analogy is a red herring. However, I am going to disagree with the thrust of the statement that "reducing PUT to bitblt() is a pretty limiting view of HTTP." (This disagreement assumes that I have understood what you are trying to say, of course). As far as I am concerned, an HTTP PUT is a "full representation create / overwrite". This is based upon RFC2616, which states that "The PUT method requests that the enclosed entity be stored under the supplied Request-URI." and explicitly that "If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server." Please see http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 I believe that this is not a "partial representation operation", but a "whole representation operation". What separates HTTP PUT from SQL INSERT / UPDATE is that the Content-Type of the representation can be dynamic, if the server chooses. 
For example, a server may accept a representation of type text/xml or application/xml. Or maybe a specific xml type, such as application/atom+xml. This is uncontroversial, but it may also choose to accept an entirely different type and internally handle the necessary transformation to the desired type. Moving onto HTTP POST. RFC2616 states that "The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line." For me, the key word there is 'subordinate'. This means that, unlike PUT, POST is a "partial representation operation". Please see http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5 Note that both PUT and POST can redirect the user-agent after the operation: PUT = "If the server desires that the request be applied to a different URI, it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its own decision regarding whether or not to redirect the request." POST = "the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource." [end-quote] Alan Dean http://thoughtpad.net/who/alan-dean/
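Alan's reading of RFC 2616 can be sketched with a toy in-memory store (purely illustrative names, not any real framework's API): PUT is an idempotent whole-representation create-or-replace at a client-chosen URI, while POST creates a new subordinate at a server-chosen URI, so repeating it yields new children.

```python
# Minimal sketch (hypothetical names): an in-memory store contrasting
# PUT as "whole-representation create/overwrite" with POST as
# "create a new subordinate of the request-URI".
import itertools

class Store:
    def __init__(self):
        self.resources = {}            # uri -> representation
        self.ids = itertools.count(1)  # server-chosen ids for POST

    def put(self, uri, representation):
        # Create-or-replace: repeating the same PUT leaves the same state.
        created = uri not in self.resources
        self.resources[uri] = representation
        return 201 if created else 200

    def post(self, collection_uri, representation):
        # The server picks the subordinate URI; repeating POST makes a new child.
        child = f"{collection_uri}/{next(self.ids)}"
        self.resources[child] = representation
        return 201, child

store = Store()
assert store.put("/user/1234", {"name": "alice"}) == 201  # created
assert store.put("/user/1234", {"name": "alice"}) == 200  # idempotent overwrite
status, uri1 = store.post("/user", {"name": "bob"})
status, uri2 = store.post("/user", {"name": "bob"})
assert uri1 != uri2  # POST twice -> two subordinate resources
```

The design point is the one Alan draws from RFC 2616: the "bitblt" quality of PUT (the whole representation is set, so repetition is harmless) is exactly what makes it different from POST to a collection.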
On 3/21/07, Alan Dean <alan.dean@...> wrote:
> He regards himself as a "POST purist" and feels that POST should be
> used for creation, not PUT.
I tend to agree; PUT can be used that way. I think it boils down to:
- if you know an identified resource, use PUT for creation and full update
- if you only know its "container" or don't know the created resource's
identity (real or virtual), use POST for creation
> APP takes the same stance too, see
> http://bitworking.org/projects/atom/draft-ietf-atompub-protocol-14.html#post-to-create
Yeah, there's been a flurry of activity on the APP mailing-list of
late. I think a lot of the miscommunication in this area comes down to
whether your resource id is known or unknown. I tend to think of POST
as when I need to ask questions (when I don't know the URI of the
resource in question), while with the REST I "know" what I'm doing
(specific URIs). The typical example is:
Already knows the user_id
--------------------------------------
/user/{user_id}
GET /user/1234 --> 404
PUT /user/1234 ?data --> 200
GET /user/1234 --> 200
Don't know what user_id to pass in
--------------------------------------------------
/user
GET /user --> 200 + list representation
POST /user ?data --> 200 + user representation (with "user_id=1234")
PUT /user/1234 ?data --> 200
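The two flows above can be sketched as a toy in-memory handler (a hypothetical sketch — URIs and status codes follow Alex's example; a real server would likely return 201 on creation):

```python
# Hypothetical in-memory server illustrating the two creation flows:
# PUT when the client already knows the URI, POST when only the
# collection is known and the server assigns the id.
users = {}
next_id = [1000]  # mutable counter for server-assigned ids

def handle(method, uri, data=None):
    if method == "GET":
        if uri == "/user":
            return 200, list(users)            # list representation
        return (200, users[uri]) if uri in users else (404, None)
    if method == "PUT":
        users[uri] = data                       # client chose the URI
        return 200, data
    if method == "POST" and uri == "/user":
        next_id[0] += 1
        new_uri = f"/user/{next_id[0]}"         # server chose the URI
        users[new_uri] = data
        return 200, {"user_id": new_uri, **data}

# Flow 1: the client already knows the user_id
assert handle("GET", "/user/1234")[0] == 404
assert handle("PUT", "/user/1234", {"name": "a"})[0] == 200
assert handle("GET", "/user/1234")[0] == 200

# Flow 2: the server assigns the id via POST to the collection
status, body = handle("POST", "/user", {"name": "b"})
assert status == 200 and body["user_id"] in users
```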
Alex
--
---------------------------------------------------------------------------
Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps
------------------------------------------ http://shelter.nu/blog/ --------
Stefan Tilkov wrote: > Slide 13 has the following: > > GET – retrieve a resource > PUT – create a resource > POST – update (create if necessary) a resource > DELETE – delete a resource > > I don't particularly like the CRUD analogy anyway, but if it's used > at all, I believe PUT and POST should be swapped. GET - Retrieve information about resource PUT - Update (create if necessary) a resource DELETE - Delete a resource POST - Have a resource process another (possibly null) resource and retrieve either the results of that, or information about a resource that was updated or created as a result. This can include duplicating the functionality of the three other verbs, though unless there is a good reason for doing this it is probably indicative of poor design. Not as snappy as forcing an analogy to CRUD, I'll admit.
Jon Hanna wrote: > Not as snappy as forcing an analogy to CRUD, I'll admit. Though it's not that far away from CRUDE.
On Mar 21, 2007, at 11:46 AM, Jon Hanna wrote: > GET - Retrieve information about resource > PUT - Update (create if necessary) a resource > DELETE - Delete a resource > POST - Have a resource process another (possibly null) resource and > retrieve either the results of that, or information about a resource > that was updated or created as a result. This can include duplicating > the functionality of the three other verbs, though unless there is a > good reason for doing this it is probably indicative of poor design. +1. Exactly my view (and as far as I can tell, the majority RESTian opinion). Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Jon Hanna wrote: > Stefan Tilkov wrote: >> Slide 13 has the following: >> >> GET retrieve a resource >> PUT create a resource >> POST update (create if necessary) a resource >> DELETE delete a resource >> >> I don't particularly like the CRUD analogy anyway, but if it's used >> at all, I believe PUT and POST should be swapped. > > GET - Retrieve information about resource > PUT - Update (create if necessary) a resource > DELETE - Delete a resource > POST - Have a resource process another (possibly null) resource and > retrieve either the results of that, or information about a resource > that was updated or created as a result. This can include duplicating > the functionality of the three other verbs, though unless there is a > good reason for doing this it is probably indicative of poor design. > > Not as snappy as forcing an analogy to CRUD, I'll admit. Perhaps snappier: GET - Retrieve representation of resource PUT - Update (create if necessary) a resource with the given representation DELETE - Delete a resource POST - Do something, possibly based on the given representation
Hi, maybe this is of interest: http://www.markbaker.ca/2001/09/draft-baker-http-resource-state-model-01.txt ...not directly related but mentioned far too seldom, IMO. Jan On 21.03.2007, at 10:56, Stefan Tilkov wrote: > Slide 13 has the following: > > GET – retrieve a resource > PUT – create a resource > POST – update (create if necessary) a resource > DELETE – delete a resource > > I don't particularly like the CRUD analogy anyway, but if it's used > at all, I believe PUT and POST should be swapped. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > > On Mar 21, 2007, at 9:04 AM, Dave Pawson wrote: > >>> Minor note >>> PUT = Create a resource >>> >>> Using 'create' can be confusing - perhaps 'set the state' of a >> resource is >>> better. >>> The reason 'create' can be confusing is that folks may assume a >> PUT to a >>> pre-existing resource would be an error, when it isn't an error >> to set the >>> state of a pre-existing resource. >> >> I'd confuse that exactly the opposite way! It must exist to have a >> state to change? >> >> I found the slides refreshingly clear. >> >> regards >> -- >> Dave Pawson >> XSLT XSL-FO FAQ. >> http://www.dpawson.co.uk
Alan Dean wrote: > On 3/21/07, Stefan Tilkov <stefan.tilkov@...> wrote: >> Slide 13 has the following: >> >> GET – retrieve a resource >> PUT – create a resource >> POST – update (create if necessary) a resource >> DELETE – delete a resource >> >> I don't particularly like the CRUD analogy anyway, but if it's used >> at all, I believe PUT and POST should be swapped. > > Interestingly, I have been having a discussion with Steve Maine on a > very similar subject after his blog entry "Musings on PUT and POST" > here http://hyperthink.net/blog/2007/03/15/Musings+On+PUT+And+POST.aspx > > He regards himself as a "POST purist" and feels that POST should be > used for creation, not PUT. > > APP takes the same stance too, see > http://bitworking.org/projects/atom/draft-ietf-atompub-protocol-14.html#post-to-create > > I have to say that I don't see it that way. Here is an excerpt of my > conversation with Steve: > > [quote] > > I am going to agree with one aspect what you are saying Steve - namely > that the SQL analogy is a red herring. > > However, I am going to disagree with the thrust of the statement that > "reducing PUT to bitblt() is a pretty limiting view of HTTP." This is all down to server policy. If a server doesn't want a client to be able to create arbitrary resources without first going through, it can complain vehemently that the request is Forbidden along with an explanation of why. That said, if it makes sense for the application running on the server to allow clients to create resources with arbitrary URIs, all's well. Purity is for schmucks. Unless there's a good reason why PUT shouldn't be allowed create resources, the application should accept it. K.
Keith Gaughan wrote: > Purity is for schmucks. Unless there's a good reason why PUT shouldn't be > allowed create resources, the application should accept it. Funny, I consider allowing PUT to be the purer form. Just goes to show how subjective "purity" is. A lesson for our modern pluralistic society :D
Jon Hanna wrote: > Keith Gaughan wrote: >> Purity is for schmucks. Unless there's a good reason why PUT shouldn't be >> allowed create resources, the application should accept it. > > Funny, I consider allowing PUT to be the purer form. I'd originally written "PUT/POST purity is for schmucks", but thought it wasn't needed. I don't really think either is more "pure" myself: both have advantages and disadvantages depending on the application and constraints to be satisfied. K.
> > I'd confuse that exactly the opposite way! It must exist to have a state to change? But PUT isn't "change" the state - it's "set" the state. This means a 'non-existing' resource now comes into existence. Another wrinkle with PUT is that for the URI, the client is the decider. And as we all know, a decider is just a tool. > > I found the slides refreshingly clear. I agree that the slides are helpful, this is just a minor point.
On 3/17/07, Benjamin Carlyle <benjamincarlyle@...> wrote: > > > * A train list document has a train list structure > > Ok, a list can be expressed directly in a tree or a graph. > > A graph is rarely the data structure of choice when working with data > that could be stored a list. Graphs carry with them intrinsic > algorithmic complexity that lists side-step. Choosing a more specific > type reduces the cost of developing an application. Choosing a more > general type to convey the same information increases cost. It forces > information consumers to accomodate possible variation in structure that > does not exist in practice and employ more complex algorithms than > necessary. Clearly a directed list is a proper subset of a tree, and a tree is a proper subset of a DAG. Yet, like you say, people jump for the list ahead of tree walking and way ahead of graph manipulation. Why? Some theories:

- iteration through a list is easier than recursive tree walking and trying to properly handle a directed graph.
- lists are similar to arrays, which are the ubiquitous 'first data structure'
- there is crude language support for lists even in today's OO languages, e.g. the foreach support in C# and Java, STL in C++.
- a large swathe of the developer community didn't put in the hours listening to Milner go on about concurrency, Fourman on temporal reasoning, or any of the AI people showing how Prolog can be made to do useful things.

For some reason, lists are something everyone is happy with. Then come trees, which are ubiquitous in XML work... either you walk the DOM by hand or you hand off tree walking to the SAX parser and handle the node events. As soon as you turn to XPath, you are into real tree land, which is why it's interesting that C# 3.0 and Java (7?) add this. Gradually the mainstream 'enterprise' languages are following the data structures, with about a 5 year lag. 
Yet if you look at a lot of XML docs, there's enough x-referencing in them to make them somewhat graphy:

- anchor refs in XHTML
- target dependencies (ant, nant, msbuild)
- serialized data xrefs (SOAP section 5 encoding)

Both DTDs and XSD have support for ID attributes, because giving elements unique IDs - and referring to them later - was felt to matter. So graphs are coming, whether you make it explicit in the language or not. And without explicit support in the language and the libraries, you end up implementing things badly, with aggressively suboptimal graph traversal, code that assumes your graph is a DAG, etc, etc. As for RDF, well, I have to tread carefully else the Jena team give me a hard stare at the coffee machine. I fail to see how you can call it a descriptive language until you can say 'all french films have a sex scene', which is trivial in Prolog: has(X,sex_scene):-nationality(X,french),is_a(X,film). Yet RDF forces me to list each film one by one with its own URI and declare the fact. Maybe SPARQL or whatever will correct this deficiency. The other problem with RDF is that it is pretty painful to work with in today's languages, the ones that are barely au fait with lists, let alone trees. But you know what? They don't do XML very well either, which is why the SOAP stacks try and hide those nasty XML things from the SOAP endpoint authors. Perhaps a future evolution of Java/C# will make them more amenable to graph work alongside tree manipulation.... -steve
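Steve's cost argument can be made concrete with a small sketch (toy data, hypothetical function names): a list needs nothing but iteration, while even the most minimal graph walk has to carry cycle bookkeeping.

```python
# Illustrating why graphs cost more than lists: list iteration needs no
# bookkeeping, but a naive graph walk loops forever on a cycle, so every
# traversal must carry a visited set.
def visit_list(items):
    return [x.upper() for x in items]  # no structural hazards at all

def visit_graph(edges, start):
    # edges: node -> list of neighbours; the graph may contain cycles
    seen, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue                   # this check is the extra cost
        seen.add(node)
        order.append(node)
        stack.extend(reversed(edges.get(node, [])))
    return order

assert visit_list(["a", "b"]) == ["A", "B"]
# a -> b -> c -> a is cyclic, yet the traversal terminates
assert visit_graph({"a": ["b"], "b": ["c"], "c": ["a"]}, "a") == ["a", "b", "c"]
```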
On 22 Mar 2007, at 21:50, Steve Loughran wrote:
> As for RDF, well, I have to tread carefully else the Jena team give me
> a hard stare at the coffee machine. I fail to see how you can call it
> a descriptive language until you can say 'all french films have a sex
> scene', which is trivial in Prolog
> has(X,sex_scene):-nationality(X,french),is_a(X,film).
> yet RDF forces me to list each film one by one with its own URI and
> declare the fact. Maybe SPARQL or whatever will correct this
> deficiency.
In N3 this is easy.
{ ?f :contains [ a :SexScene ]. } =>
{ ?f a :Film;
:made_in :France .
}
You don't get it in simple RDF/XML, because they are building things
layer by layer.
The first layer is the relational model.
Then you have to add graphs, or be able to quote graphs.
Then you have to add rules.
When people publish RDF, it is clearly best to give them a language
that does not contain rules, or quotation mechanisms. People should
first say what they believe, before they go on saying what they
believe others believe. Furthermore rules are even more dangerous to
use. So doing things in a layered way, they left the more complicated
bits for later. But the structure has rock solid foundations.
OWL is halfway between a vocabulary and rules. The vocabulary has
implications that are well understood.
Learn N3 if you want to see where the Semantic Web can go.
Henry
On 22 Mar 2007 14:17:35 -0700, Henry Story <henry.story@...> wrote:
> On 22 Mar 2007, at 21:50, Steve Loughran wrote:
>
> > As for RDF, well, I have to tread carefully else the Jena team give me
> > a hard stare at the coffee machine. I fail to see how you can call it
> > a descriptive language until you can say 'all french films have a sex
> > scene', which is trivial in Prolog
> > has(X,sex_scene):-nationality(X,french),is_a(X,film).
> > yet RDF forces me to list each film one by one with its own URI and
> > declare the fact. Maybe SPARQL or whatever will correct this
> > deficiency.
>
> In N3 this is easy.
>
> { ?f :contains [ a :SexScene ]. } =>
> { ?f a :Film;
> :made_in :France .
> }
Implication is available in RDF Schema if you don't mind tweaking the
model a little -
:FrenchFilm rdfs:subClassOf :FrenchThings ;
rdfs:subClassOf :FilmsWithSexScene .
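A toy reading of the rdfs:subClassOf tweak above — membership in a subclass entails membership in every superclass. This is an illustrative sketch, not a real RDFS reasoner; the class names just follow the example:

```python
# Toy rdfs:subClassOf entailment: compute the transitive closure of the
# subclass relation, so an instance typed :FrenchFilm is also entailed
# to be a :FrenchThings and a :FilmsWithSexScene.
subclass_of = {
    "FrenchFilm": {"FrenchThings", "FilmsWithSexScene"},
}

def types_of(direct_type):
    # Walk rdfs:subClassOf transitively from the directly asserted type.
    result, frontier = {direct_type}, [direct_type]
    while frontier:
        cls = frontier.pop()
        for parent in subclass_of.get(cls, ()):
            if parent not in result:
                result.add(parent)
                frontier.append(parent)
    return result

assert types_of("FrenchFilm") == {"FrenchFilm", "FrenchThings", "FilmsWithSexScene"}
```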
Steve, you mentioned OO languages - objects & their types tend to be
interrelated in diverse ways: class hierarchies, composition etc.
People may not be used to expressing the objects as raw data without
associated behaviour, but the data structures used do get complex (&
graph-shaped) - "lists are something everyone is happy with", sure,
but most people expect more than arrays from their programming
language.
The impedance mismatch between programming languages and data tools is
a problem, if years of clunky object-relational kit is anything to go
on. (Ironically, the ability to make simple tables/lists from individual
relations has been very handy in a lot of web tools).
But the point of RDF is to describe resources. If anything's worth
talking about, give it a URI. If relationships between things are
worth talking about, give the relationships URIs too. That produces an
generalised entity-relationship model for describing things, with a
level of web-compatibility built in.
Cheers,
Danny.
--
http://dannyayers.com
Steve Loughran wrote: > > As for RDF, well, I have to tread carefully else the Jena team give me > a hard stare at the coffee machine. I fail to see how you can call it > a descriptive language until you can say 'all french films have a sex > scene', which is trivial in Prolog > has(X,sex_scene):-nationality(X,french),is_a(X,film). > yet RDF forces me to list each film one by one with its own URI and > declare the fact. Maybe SPARQL or whatever will correct this > deficiency. RDF doesn't have existential quantifiers (blank nodes are not the same thing) and that's one of the reasons some KR hardhats call it braindead. Possibly OWL can say that, and maybe n3 (tbl seems to talk about Ǝ a lot). As I've spent much of the last year concatenating strings (enterprise braindeath), I could well be off. Check with Jeremy Carroll, he knows this stuff. When Danny says "But the point of RDF is to describe resources", that's true, but not having the power to say things about quantities of resources is highly limiting. We clearly need it. Look how RSS took off as a syntax for collections of web resources. But I think before we jump to graphs, we have a good few more years of dictionary processing to do (eg Atom). cheers Bill
To develop a little what I had said earlier about graphs and bounce off what Danny just said. On 22 Mar 2007, at 23:09, Danny Ayers wrote: > Steve, you mentioned OO languages - objects & their types tend to be > interrelated in diverse ways: class hierarchies, composition etc. > People may not be used to expressing the objects as raw data without > associated behaviour, but the data structures used do get complex (& > graph-shaped) - "lists are something everyone is happy with", sure, > but most people expect more than arrays from their programming > language. Here's a nice diagram showing the relation between RDF classes and instances and Java classes and instances. The so(m)mer framework [1] uses java annotations to do this kind of mapping.
On 23 Mar 2007, at 06:04, Henry Story wrote: > Here's a nice diagram showing the relation between RDF classes and > instances and Java classes and instances. This mailing list is pretty crappy. I noticed it does not store the images sent. So for those reading this list at a later date, the diagram I mentioned is now on the https://sommer.dev.java.net front page. Henry
Robert Sayre wrote: > On 19 Mar 2007 18:15:12 -0700, Bill de hOra <bill@...> wrote: >> >> Neither does XML. No browsers support XML namespaces properly; > > This comment is incorrect. There are some browser features that don't > support XML namespaces properly. Firefox's Live Bookmarks feature > would be one (the feed preview screen has no known namespace bugs). Wrong. Playing a sorites game where a browser is nothing more than a pile of features is an unconvincing way to prove correctness. I'll stand by that claim, especially when piles of features like firefox2 try to render atom entries with type=xhtml as XHTML no matter where the XHTML ns is placed on the enclosing div, and other piles of features like opera9 and firefox1.5 can't deal with namespace prefixes. cheers Bill
Henry Story wrote:
>
>
> On 22 Mar 2007, at 21:50, Steve Loughran wrote:
>
> > As for RDF, well, I have to tread carefully else the Jena team give me
> > a hard stare at the coffee machine. I fail to see how you can call it
> > a descriptive language until you can say 'all french films have a sex
> > scene', which is trivial in Prolog
> > has(X,sex_scene):-nationality(X,french),is_a(X,film).
> > yet RDF forces me to list each film one by one with its own URI and
> > declare the fact. Maybe SPARQL or whatever will correct this
> > deficiency.
>
> In N3 this is easy.
>
> { ?f :contains [ a :SexScene ]. } =>
> { ?f a :Film;
> :made_in :France .
> }
>
> You don't get it in simple RDF/XML, because they are building things
> layer by layer.
That's like saying you don't get macros in sexprs because they built
Lisp at another layer. N3 is not RDF. Saying that's easy in N3 is
irrelevant to RDF Henry; you know this.
cheers
Bill
On 23 Mar 2007, at 11:49, Bill de hOra wrote:
> Henry Story wrote:
> >
> >
> > On 22 Mar 2007, at 21:50, Steve Loughran wrote:
> >
> > > As for RDF, well, I have to tread carefully else the Jena team
> give me
> > > a hard stare at the coffee machine. I fail to see how you can
> call it
> > > a descriptive language until you can say 'all french films have
> a sex
> > > scene', which is trivial in Prolog
> > > has(X,sex_scene):-nationality(X,french),is_a(X,film).
> > > yet RDF forces me to list each film one by one with its own URI
> and
> > > declare the fact. Maybe SPARQL or whatever will correct this
> > > deficiency.
> >
> > In N3 this is easy.
> >
> > { ?f :contains [ a :SexScene ]. } =>
> > { ?f a :Film;
> > :made_in :France .
> > }
> >
> > You don't get it in simple RDF/XML, because they are building things
> > layer by layer.
>
> That's like saying you don't get macros in sexprs because they built
> Lisp at another layer. N3 is not RDF. Saying that's easy in N3 is
> irrelevant to RDF Henry; you know this.
No, the Semantic Web stack is work in progress. Things move on and
improve. One rock solid stack is built upon the next. Go and see what
those at the leading edge of the stack are doing to get an idea where
it is going.
The Turtle subset of N3 (ie without graphs and rules) is just another
serialization of RDF.
Graphs have now been adopted as part of SPARQL.
There is a W3C working group on rules.
Are you going to say that RDF is incompatible with rules, because
that working group has not yet finished its work?
Clearly N3 is a very good example of how this is possible.
Henry
> cheers
> Bill
>
On 23 Mar 2007 03:47:05 -0700, Bill de hOra <bill@...> wrote: > > Wrong. Humor me on this one... > Playing a sorites game where a browser is nothing more a pile of > features is an unconvincing way to prove correctness. What I did was explain that, yes, browsers actually do support XML namespaces. And yes, there are bugs. > I'll stand by that claim, Well, expat is sitting right there doing its thing, getElementsByClassName works across XUL, XHTML, and SVG with arbitrary namespaces prefixes, and we even bend over to do qnames in content for one horrid w3c spec: <http://lxr.mozilla.org/seamonkey/source/toolkit/components/feeds/test/xml/rfc4287/feed_accessible.xml> > especially when piles of features like firefox2 try to render > atom entries with type=xhtml as XHTML no matter when the XHTML ns is > placed on the enclosing div, That doesn't have anything to do with XML namespaces, afaik. Maybe you could try explaining it again so I understand what you're saying. -- Robert Sayre
Robert Sayre wrote: > On 23 Mar 2007 03:47:05 -0700, Bill de hOra <bill@...> wrote: > > >> especially when piles of features like firefox2 try to render >> atom entries with type=xhtml as XHTML no matter where the XHTML ns is >> placed on the enclosing div, > > That doesn't have anything to do with XML namespaces, afaik. Maybe you > could try explaining it again so I understand what you're saying. Firefox will attempt to render an atom entry that uses type=xhtml. Removing the XHTML namespace declaration stops the rendering attempt. Maybe you can rationalise it again so I understand how that has nothing to do with XML namespaces. cheers Bill
Bill de hOra wrote: > Firefox will attempt to render an atom entry that uses type=xhtml. > Removing the XHTML namespace declaration stops the rendering attempt. > Maybe you can rationalise it again so I understand how that has nothing > to do with XML namespaces. Actually, that sounds at first description like it is using the namespace correctly to work out what it's got - it's just got the wrong idea about what to do with it once it's got it.
On 23 Mar 2007 09:14:24 -0700, Bill de hOra <bill@...> wrote: > > Firefox will attempt to render an atom entry that uses type=xhtml. > Removing the XHTML namespace declaration stops the rendering attempt. Do you mean removing the declaration or the namespace? It will ignore content that is not enclosed in an XHTML div, a MUST-level requirement of RFC4287. <http://atompub.org/rfc4287.html#rfc.section.3.1.1.3> IIRC, it will render content enclosed in multiple divs, even though that is illegal per spec. The code isn't making any mistakes regarding namespaces afaik. > Maybe you can rationalise it again so I understand how that has nothing > to do with XML namespaces. Here's a demo feed with a bunch of silly namespace gymnastics: <http://people.mozilla.com/~sayrer/2007/03/23/test.atom> Can you demonstrate the bug you're seeing? -- Robert Sayre
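For reference, the RFC 4287 rule cited above requires that content with type="xhtml" contain a single XHTML div element as its only child. A minimal sketch of a conforming fragment (the element content itself is hypothetical):

```xml
<content type="xhtml">
  <div xmlns="http://www.w3.org/1999/xhtml">
    <p>This is <b>XHTML</b> content.</p>
  </div>
</content>
```

Nesting multiple divs, or omitting the XHTML namespace on the div, makes the content illegal per the spec.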
Henry Story wrote: > On 23 Mar 2007, at 11:49, Bill de hOra wrote: >> Henry Story wrote: >> > You don't get it in simple RDF/XML, because they are building things >> > layer by layer. >> >> That's like saying you don't get macros in sexprs because they built >> Lisp at another layer. N3 is not RDF. Saying that's easy in N3 is >> irrelevant to RDF Henry; you know this. > No, the Semantic Web stack is work in progress. "N3 is not RDF." - I find this to be indisputable. Talking about stacks and works in progress isn't technically relevant. > The Turtle subset of N3 (ie without graphs and rules) is just another > serialization of RDF. If so, then the turtle subset of N3 isn't N3, it's RDF. You're making my point. > Are you going to say that RDF is incompatible with rules, because that > working group has not yet finished its work? Rules, what are they? I'm saying that RDF and N3 are different logical languages (I'm not even sure N3 is defined independently of the cwm code). If you want to say what N3 lets you say, I'd tend to "just use" Prolog. cheers Bill
On 23 Mar 2007, at 18:51, Bill de hOra wrote: > Henry Story wrote: > > On 23 Mar 2007, at 11:49, Bill de hOra wrote: > >> Henry Story wrote: > >> > You don't get it in simple RDF/XML, because they are building > things > >> > layer by layer. > >> > >> That's like saying you don't get macros in sexprs because they > built > >> Lisp at another layer. N3 is not RDF. Saying that's easy in N3 is > >> irrelevant to RDF Henry; you know this. > > No, the Semantic Web stack is work in progress. > > "N3 is not RDF." - I find this to be indisputable. It's also not very interesting. > Talking about stacks > and works in progress isn't technically relevant. What people coming to the Semantic Web are looking for is an idea of where things are going. N3 shows how one can very well build rules on the Semantic Web stack. > > The Turtle subset of N3 (ie without graphs and rules) is just > another > > serialization of RDF. > > If so, then the turtle subset of N3 isn't N3, it's RDF. You're > making my > point. I have made your point, but you lost the overall debate. > > > Are you going to say that RDF is incompatible with rules, > because that > > working group has not yet finished its work? > > Rules, what are they? I'm saying that RDF and N3 are different logical > languages (I'm not even sure N3 is defined independently of the cwm > code). If you want to say what N3 lets you say, I'd tend to "just use" > Prolog. N3 is also defined in N3; see: http://bblfish.net/blog/page5.html > cheers > Bill > >
On Fri, 2007-03-23 at 19:09 +0100, Henry Story wrote:
> I have made your point, but you lost the overall debate.
Steve said, "you can't state $inference_rule in RDF."
You said, "but you can in N3."
Bill said, "but N3 isn't RDF, but something on top of it."
Regardless, this doesn't have much to do with REST. Could this "debate"
move off-list?
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
Josh Sled wrote: > On Fri, 2007-03-23 at 19:09 +0100, Henry Story wrote: >> I have made your point, but you lost the overall debate. > > Steve said, "you can't state $inference_rule in RDF." > You said, "but you can in N3." > Bill said, "but N3 isn't RDF, but something on top of it." Misquoting Bill is not a good way to get this offlist. cheers Bill
Alan Dean wrote: > He regards himself as a "POST purist" and feels that POST should be > used for creation, not PUT. > That's wrong. The message still isn't getting out so one more time: When creating a new resource use A. POST if the server chooses the URL B. PUT if the client chooses the URL It is not as simple as "All creates must be done through POST" or "All creates must be done through PUT". SQL CRUD does not map onto REST in a 1-1 fashion. See http://www.elharo.com/blog/software-development/web-development/2005/12/08/post-vs-put/ -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
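In the trace notation used elsewhere in this thread, the two creation styles look like this (URLs hypothetical):

```
A. Server chooses the URL:

   --> POST /entries
       <new entry>
   <-- 201 Created
       Location: /entries/42

B. Client chooses the URL:

   --> PUT /entries/my-entry
       <new entry>
   <-- 201 Created
```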
On 3/25/07, Elliotte Harold <elharo@...> wrote: > Alan Dean wrote: > > > He regards himself as a "POST purist" and feels that POST should be > > used for creation, not PUT. > > > > That's wrong. The message still isn't getting out so one more time: > > When creating a new resource use > > A. POST if the server chooses the URL > B. PUT if the client chooses the URL > > It is not as simple as "All creates must be done through POST" or "All > creates must be done through PUT". SQL CRUD does not map onto REST in a > 1-1 fashion. I agree that the CRUD / SQL analogy is incomplete and a mismatch (bear in mind that I was referring to someone else's position in my quote). I personally think in these terms: "When creating a whole resource, use PUT onto the target URL (ie the user-agent chooses the URL) and when creating a subordinate resource (aka append) use POST onto the intermediate URL (ie the server chooses the target URL)." Unfortunately, the Atom PP does not observe this pattern (at least, it doesn't as far as I can see) and I fear that this will lead to a division between 'RESTful' and 'REST-like' implementations that can only confuse the jobbing developer (just look at RSS1.0 (RDF) -vs- RSS2.0). Regards, Alan Dean http://thoughtpad.net/who/alan.dean/
Alan Dean wrote: > Unfortunately, the Atom PP does not observe this pattern (at least, it > doesn't as far as I can see) and I fear that this will lead to a > division between 'RESTful' and 'REST-like' implementations that can > only confuse the jobbing developer (just look at RSS1.0 (RDF) -vs- > RSS2.0). > There are some minor glitches in APP's RESTfulness, but I don't think this is one of them. APP simply does not allow the client to choose the final URL for a new entry, ever. Consequently it never uses PUT to create a new entry. All entries have their initial URL assigned by the server. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
[ Attachment content not displayed ]
On 25 Mar 2007 13:36:18 -0700, Elliotte Harold <elharo@...> wrote: > That's wrong. The message still isn't getting out so one more time: > > When creating a new resource use > > A. POST if the server chooses the URL > B. PUT if the client chooses the URL Perhaps that would make a good signature line for this list? It's becoming a perma-thread. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
On Mar 25, 2007, at 10:51 PM, Alan Dean wrote: > I personally think in these terms: > > "When creating a whole resource, use PUT onto the target URL (ie the > user-agent chooses the URL) and when creating a subordinate resource > (aka append) use POST onto the intermediate URL (ie the server chooses > the target URL)." > That's one distinction, but not the only one; as you write, this is your personal view ... fine with me. > Unfortunately, the Atom PP does not observe this pattern (at least, it > doesn't as far as I can see) and I fear that this will lead to a > division between 'RESTful' and 'REST-like' implementations that can > only confuse the jobbing developer (just look at RSS1.0 (RDF) -vs- > RSS2.0). This seems to create a problem where there really is none. APP doesn't follow *your personal view*, but that doesn't mean it's unRESTful. Creating new resources via POST is perfectly fine, and creating them via PUT is perfectly OK as well. PUT and POST are defined in RFC 2616. Having just re-read this, I believe it strongly supports Elliotte's POV. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
I'm working on a REST interface for a system that doles out unique
identifiers. For each request, the caller gives the system a
namespace tag, and the system gives back the next number for that
namespace.
So,
--> GET /numbers/FOO/next
1 <--
--> GET /numbers/FOO/next
2 <--
--> GET /numbers/BAR/next
1 <--
That seems really wrong -- GET should be idempotent, and these
requests clearly are not. But, I'm also clearly getting, fetching, a
value.
How would you implement this interface?
A similar question to this was asked regarding throwing Dice:
http://tech.groups.yahoo.com/group/rest-discuss/message/7768
The dice discussion wandered a bit, but the most substantive
suggestion seemed to be:
Use GET
Return a 302 status and a Location: header, representing the
numerical result as a URL
Include the numerical result in the body of the response
Is there any consensus on this model? I'm still concerned about the
use of GET, and the use of the Location: header seems superfluous.
eric.
Applying the razor to this question leads me to ask: why can't you just use POST? For example: POST /numbers/FOO/next => 1 POST /numbers/FOO/next => 2 POST /numbers/BAR/next => 1 -- Nic Ferrier http://www.tapsellferrier.co.uk
>>>>> "Eric" == Eric Busboom <eric@...> writes:
Eric> I'm working on a REST interface for a system that doles out
Eric> unique identifiers. For each request, the caller gives the
Eric> system a namespace tag, and the system gives back the next
Eric> number for that namespace.
Why not POST? Can return data just fine.
--
All the best,
Berend de Boer
Nic James Ferrier wrote: > Applying the razor to this question leads me to ask: > > why can't you just use POST? > > For example: > > POST /numbers/FOO/next > => 1 > > POST /numbers/FOO/next > => 2 > > POST /numbers/BAR/next > => 1 That's not really a fetch then, is it? K. -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845
Keith Gaughan <keith@...> writes: > Nic James Ferrier wrote: > >> Applying the razor to this question leads me to ask: >> >> why can't you just use POST? >> >> For example: >> >> POST /numbers/FOO/next >> => 1 >> >> POST /numbers/FOO/next >> => 2 >> >> POST /numbers/BAR/next >> => 1 > > That's not really a fetch then, is it? Errr.. you still get the data. And you're not doing a fetch, are you? You're doing an implicit update of a sequence. -- Nic Ferrier http://www.tapsellferrier.co.uk
Nic James Ferrier wrote: > Keith Gaughan <keith@...> writes: > >> Nic James Ferrier wrote: >> >>> Applying the razor to this question leads me to ask: >>> >>> why can't you just use POST? >>> >>> For example: >>> >>> POST /numbers/FOO/next >>> => 1 >>> >>> POST /numbers/FOO/next >>> => 2 >>> >>> POST /numbers/BAR/next >>> => 1 >> That's not really a fetch then, is it? > > Errr.. you still get the data. > > And you're not doing a fetch, are you? You're doing an implicit > update of a sequence. Nothing implicit about it. That's an update alright. -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845
Keith Gaughan wrote: > That's not really a fetch then, is it? Your problem is that what you are doing is not really fetch. Problem solved :) That said, if there was no issue with id's getting lost then I see no problem in GETting it as long as it is clearly marked as not cacheable. Resources can change over time (the fact that time is a factor in REST is often neglected). Conceptually you have a resource that changes so quickly that it will never have the same representation from one GET to the next. The fact that it's the actual GET that causes this change to be apparent is just an implementation detail :) That's a tad facetious perhaps, but consider the case where each time the GET is done it calls into a UUID algorithm - you've no side-effects from the GET, but you're still getting a different ID each time. Updating a sequence based on when a GET is done has the same effects. As far as the bytes on the wire are concerned, both just give you a different response each time. If however the way in which the GET causes changes is important (e.g. it's important that items in the sequence aren't lost) then the nature of the side-effect becomes important, and GET ceases to be appropriate.
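Jon's UUID case can be sketched like this (the function name is hypothetical, not from the original posts): the handler computes a fresh value on every call but mutates no server state, so repeating the GET costs nothing.

```python
import uuid

def handle_get_next_id():
    # Derives a fresh identifier per request; no counter is read or
    # written, so the GET has no server-side side effect even though
    # every response body differs.
    return uuid.uuid4().hex
```

A counter-backed handler, by contrast, consumes a number on every GET, so a retried or prefetched request silently burns an identifier that no other client can ever receive.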
On 04 Apr 2007 02:10:28 -0700, Nic James Ferrier <nferrier@...> wrote: [snip] > > > > That's not really a fetch then, is it? > > Errr.. you still get the data. > > And you're not doing a fetch, are you? You're doing an implicit > update of a sequence. Often, I find myself in agreement with Nic - but not this time (from memory I think that we disagreed on the 'dice' thread too). As I see it, the problem with using POST is that it assumes the user-agent knows the server internals (i.e. that the server will generate a new unique id per request). In reality, this might not be the case. For example, the server implementation might be to spawn a million new id's once per month and error if they run out. This is an unlikely example, but illustrates the point that the server internals are just that - internal. What the user-agent is requesting is "please get me the next unique id", and therefore (I think) ought to be a GET: --> GET /list/next <-- 307 Temporary Redirect Location: /list/abc123 This way, the user-agent need have no knowledge of the server internals. Further, this also leaves open appropriate user-agent driven POST and PUT actions: --> GET /list/abc123 Accept: text/plain <-- 200 OK Content-Type: text/plain abc123 --> PUT /list/xyz789 Content-Type: text/plain If-None-Match: * xyz789 <-- 201 Created --> PUT /list/xyz789 Content-Type: text/plain If-None-Match: * xyz789 <-- 412 Precondition Failed --> POST /list Content-Type: text/plain def123 <-- 303 See Other Location: /list/def123 Regards, Alan Dean http://thoughtpad.net/who/alan-dean/
Jon Hanna wrote: > If however the way in which the GET causes changes is important (e.g. > it's important that items in the sequence aren't lost) then the nature > of the side-effect becomes important, and GET ceases to be appropriate. An example of a case where GET is arguably not appropriate is "page hit counters". Unfortunately page hit counters can only really be done by GET (unless you want to get fancy with AJAX or similar), so the way to not do it is the only way to do it. Fortunately since they've never looked good, the only positive thing page hit counters do is make us nostalgic for how the web looked in the early nineties :)
Jon Hanna wrote: > Jon Hanna wrote: >> If however the way in which the GET causes changes is important (e.g. >> it's important that items in the sequence aren't lost) then the nature >> of the side-effect becomes important, and GET ceases to be appropriate. > > An example of a case where GET is arguably not appropriate is "page hit > counters". It's only inappropriate if you assume that these things are accurate. After all, idempotency doesn't rule out side-effects completely, it just means that you can't really depend on those side-effects and those side-effects can't cause the system to behave differently. K. -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845
Jon Hanna wrote: > Keith Gaughan wrote: >> That's not really a fetch then, is it? > > Your problem is that what you are doing is not really fetch. Problem > solved :) > > That said, if there was no issue with id's getting lost then I see no > problem in GETting it as long as it is clearly marked as not cacheable. > > Resources can change over time (the fact that time is a factor in REST > is often neglected). Conceptually you have a resource that changes so > quickly that it will never have the same representation from one GET to > the next. The difference being that a GET can't cause those resources to change in any significant way, but that's not to say that they can't change due to some other stimulus, be it internal or external. > The fact that it's the actual GET that causes this change to be apparent > is just an implementation detail :) > > That's a tad facetious perhaps, but consider the case where each time > the GET is done it calls into a UUID algorithm - you've no side-effects > from the GET, but you're still getting a different ID each time. > Updating a sequence based on when a GET is done has the same effects. As > far as the bytes on the wire are concerned, both just give you a > different response each time. But if the resource represents a counter, a GET updates that counter, and you really, really care about each request giving a unique value, you're in trouble. The counter increment, in that context, becomes a non-trivial side-effect. > If however the way in which the GET causes changes is important (e.g. > it's important that items in the sequence aren't lost) then the nature > of the side-effect becomes important, and GET ceases to be appropriate. Bingo! -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845
If you are actually creating identifiers, you could view that as creating a new resource, with a URI. So that seems to be a good fit for: --> POST /numbers/FOO <-- 201 Created. Location: /numbers/FOO/1 --> POST /numbers/FOO <-- 201 Created. Location: /numbers/FOO/2 --> POST /numbers/BAR <-- 201 Created. Location: /numbers/BAR/1 Note that the 'next' part of the request URL is not used, just the namespace/collection. The dice throwing example is not really equivalent, as dice results are not unique between requests. /vidar On 4 Apr 2007, at 03:51, Eric Busboom wrote: > > I'm working on a REST interface for a system that doles out unique > identifiers. For each request, the caller gives the system a > namespace tag, and the system gives back the next number for that > namespace. > > So, > > --> GET /numbers/FOO/next > 1 <-- > > --> GET /numbers/FOO/next > 2 <-- > > --> GET /numbers/BAR/next > 1 <-- > > That seems really wrong -- GET should be idempotent, and these > requests clearly are not. But, I'm also clearly getting, fetching, a > value. > > How would you implement this interface? > > A similar question to this was asked regarding throwing Dice: > http://tech.groups.yahoo.com/group/rest-discuss/message/7768 > > The dice discussion wandered a bit, but the most substantive > suggestion seemed to be: > Use GET > Return a 302 status and a Location: header, representing the > numerical result as a URL > Include the numerical result in the body of the response > > Is there any consensus on this model? I'm still concerned about the > use of GET, and the use of the Location: header seems superfluous. > > eric. > > >
On Wed, 2007-04-04 at 10:41 +0100, Alan Dean wrote:
> As I see it, the problem with using POST is that it assumes the
> user-agent knows the server internals (i.e. that the server will
> generate a new unique id per request). In reality, this might not be
> the case. For example, the server implementation might be to spawn a
> million new id's once per month and error if they run out. This is an
> unlikely example, but illustrates the point that the server internals
> are just that - internal.
>
> What the user-agent is requesting is "please get me the next unique
> id", and therefore (I think) ought to be a GET:
Neither GET nor POST imply any "unique id" or "sequence number"
internals.
I'd believe something in a media type would inform the client that the
server supports that. That same declaration would probably specify
either GET or POST, and perhaps some post-only-once or
"reliable-POST/GET-sequence" protocol depending on requirements
regarding intermediaries and the data itself.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
On 4/4/07, Vidar Larsen <vi_larsen@...> wrote: > > If you are actually creating identifiers, you could view that as creating a new resource, with an URI. > So that seems to be a good fit for: > > > --> POST /numbers/FOO > <-- 201 Created. Location: /numbers/FOO/1 > --> POST /numbers/FOO > <-- 201 Created. Location: /numbers/FOO/2 > --> POST /numbers/BAR > <-- 201 Created. Location: /numbers/BAR/1 > > > Note that the 'next' part of the request resource url is not used, just the namespace/collection Part of my concern about the POST-centric approach is that it is implicit in all the proposals in favour that the POST request has no body (please correct me if I am wrong). According to http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5 "The POST method is used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI in the Request-Line." So the spec (implicitly) expects a POST to carry an entity body. What should the body be for the following example? --> POST /numbers/FOO <-- 201 Created. Location: /numbers/FOO/1 Perhaps POST /numbers/FOO Content-Type: application/x-www-form-urlencoded action=new The advantage of this is that it makes the operation explicit, the disadvantage is that it smells awfully like "tunnelling via POST" (as does the 'empty POST' approach but it is less obvious there). Regards, Alan Dean http://thoughtpad.net/who/alan-dean/
Alan Dean wrote: > According to http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5 > "The POST method is used to request that the origin server accept the > entity enclosed in the request as a new subordinate of the resource > identified by the Request-URI in the Request-Line." > > So the spec (implicitly) expects a POST to carry an entity body. What > should the body be for the following example? I see nothing to necessarily disallow a null entity.
On 04 Apr 2007 06:44:07 -0700, Jon Hanna <jon@...> wrote: > > Alan Dean wrote: > > According to http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5 > > "The POST method is used to request that the origin server accept the > > entity enclosed in the request as a new subordinate of the resource > > identified by the Request-URI in the Request-Line." > > > > So the spec (implicitly) expects a POST to carry an entity body. What > > should the body be for the following example? > > I see nothing to necessarily disallow a null entity. You are correct, of course, but that wasn't really what I was driving at. An (explicit) empty POST can be regarded as simply shorthand for a non-empty POST. I gave an example of an equivalent POST and commented that it smelt rather like tunnelling, which was the point I was trying to make. Alan
My 2 cents, The resource you are GETting at /numbers/FOO/next can be described as "the next number in the sequence FOO". Thus the GET is idempotent as it always refers to the next number in the sequence, and the design you mentioned is fine. You do of course have an un-cachable resource representation since the contents of the representation change with every request, but the resource itself always remains the same, the value of the next number in the sequence. Paul. Eric Busboom wrote: > > > I'm working on a REST interface for a system that doles out unique > identifiers. For each request, the caller gives the system a > namespace tag, and the system gives back the next number for that > namespace. > > So, > > --> GET /numbers/FOO/next > 1 <-- > > --> GET /numbers/FOO/next > 2 <-- > > --> GET /numbers/BAR/next > 1 <-- > > That seems really wrong -- GET should be idempotent, and these > requests clearly are not. But, I'm also clearly getting, fetching, a > value. > > How would you implement this interface? > > A similar question to this was asked regarding throwing Dice: > http://tech.groups.yahoo.com/group/rest-discuss/message/7768 > <http://tech.groups.yahoo.com/group/rest-discuss/message/7768> > > The dice discussion wandered a bit, but the most substantive > suggestion seemed to be: > Use GET > Return a 302 status and a Location: header, representing the > numerical result as a URL > Include the numerical result in the body of the response > > Is there any consensus on this model? I'm still concerned about the > use of GET, and the use of the Location: header seems superfluous. > > eric. > >
Alan Dean wrote: > What the user-agent is requesting is "please get me the next unique > id", and therefore (I think) ought to be a GET: > > --> > GET /list/next This is similar to design issues around editing and (especially) mapping mom queues onto HTTP. The problem with GET here is multiple clients, and caches. Etags you can use to solve the edit case. With queues, it's trickier especially if you want to support browsing and popping. I would be inclined to use POST in this case; it saves me having to write cache busting headers and ensures each client gets its own ID (which I assume is needed). It's also more explicit in terms of design. > > <-- > 307 Temporary Redirect > Location: /list/abc123 This too is an option, but you'll still need cache busting (I think) with GET. cheers Bill
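For completeness, the "cache busting headers" Bill mentions would amount to marking the GET response uncacheable, along these lines (trace and value hypothetical):

```
--> GET /list/next

<-- 200 OK
    Cache-Control: no-store

    abc123
```

Without such a header, a shared cache sitting between clients and the server could legitimately hand the same identifier to multiple clients.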
Bill de hOra <bill@...> writes: > Alan Dean wrote: > >> What the user-agent is requesting is "please get me the next unique >> id", and therefore (I think) ought to be a GET: >> >> --> >> GET /list/next > > This is similar to design issues around editing and (especially) mapping > mom queues onto HTTP. The problem with GET here is multiple clients, and > caches. Etags you can use to solve the edit case. With queues, it's > trickier especially if you want to support browsing and popping. I would > be inclined to use POST in this case; it saves me having to write cache > busting headers and ensures each client gets its own ID (which I assume > is needed). It's also more explicit in terms of design. Spot on. It's about pragmatism guys. And the pirate code, obviously. -- Nic Ferrier http://www.tapsellferrier.co.uk
Paul James wrote: > My 2 cents, > > The resource you are GETting at /numbers/FOO/next can be described as > "the next number in the sequence FOO". Thus the GET is idempotent as it > always refers to the next number in the sequence, and the design you > mentioned is fine. > But the notion of "next" implies state. I'm not sure whether you're proposing this state exists on the client or the server. If the latter, though, I think this would change the result for all clients, and that would be bad. It should be POST. Is this a random sequence or a predetermined sequence? If random, I could see GET; but if it's a predetermined sequence I think POST is required. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On Apr 8, 2007, at 12:01 PM, Elliotte Harold wrote: > Is this a random sequence or a predetermined sequence? If random, I > could see GET; but if it's a predetermined sequence I think POST is > required. > Here is the interface I finally settled on: POST /numbers/FOO/nextInc returns the next number in the sequence and increments it. GET /numbers/FOO/next returns the next number in the sequence but does not increment it. The system has use cases for the non-incrementing GET, so the interface isn't gratuitous, but I didn't notice them at the time of the first posting. It seems a bit cheap to suppose that what is essentially a name change would resolve the question, but the "nextInc" name does imply state change, thus POST, and the "next" interface is clearly just a GET. I'm not fond of any solution that involves Location: headers or the like, because I want these interfaces to be usable as simply as possible, such as with wget and bash. Thanks for all of the responses. eric.
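The state behind Eric's settled interface can be modelled as follows (the class and method names are mine, not part of his system): peek backs the safe, non-incrementing GET, and next_inc backs the incrementing POST.

```python
from collections import defaultdict

class SequenceStore:
    # Maps the two URLs onto methods:
    #   GET  /numbers/{ns}/next     -> peek(ns)      (safe, no state change)
    #   POST /numbers/{ns}/nextInc  -> next_inc(ns)  (returns, then advances)

    def __init__(self, start=1):
        self._counters = defaultdict(lambda: start)

    def peek(self, ns):
        # Reports the next number without consuming it.
        return self._counters[ns]

    def next_inc(self, ns):
        # Non-idempotent: hands out the number and advances the sequence,
        # which is why it sits behind POST rather than GET.
        n = self._counters[ns]
        self._counters[ns] = n + 1
        return n
```

Each namespace keeps its own counter, matching the independent /numbers/FOO and /numbers/BAR sequences in Eric's original traces.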
* Alan Dean <alan.dean@...> [2007-04-04 16:00]: > I gave an example of an equivalent POST and commented that it > smelt rather like tunnelling, which was the point I was trying > to make. I can see your point; it would seem to indicate that what Eric really wants is a non-idempotent verb of his own devising, maybe `ADVANCE`. Pragmatically, though, I’m not sure that doing it neatly confers any benefits, unlike when using `POST` to tunnel idempotent methods such as `PUT` or `DELETE`. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
--- In rest-discuss@yahoogroups.com, "A. Pagaltzis" <pagaltzis@...> > I can see your point; it would seem to indicate that what Eric > really wants is a non-idempotent verb of his own devising, maybe > `ADVANCE`. I'd briefly thought of that, but would much prefer not to create new verbs, primarily because the REST model involves a limited set of verbs. I'd think that creating verbs without extensive deliberation and reasoning would be a worse abuse of the model than tunneling. On the other hand, I think the very fact that we have long discussions about these topics indicates that any real world system is going to break the REST model somewhere, and we're just looking for the least ugly compromise. eric.
* ericbusboom <eric@...> [2007-04-10 04:55]: > I'd briefly thought of that, but would much prefer not to > create new verbs, primarily because the REST model involves a > limited set of verbs. I think it involves a uniform set of verbs; having a limited set is a means, not an end. > I'd think that creating verbs without extensive deliberation > and reasoning would be a worse abuse of the model than > tunneling. Is verbing nouns as you did better than making new verbs? In terms of abstract REST, I’d say definitely. > On the other hand, I think the very fact that we have long > discussions about these topics indicates that any real world > system is going to break the REST model somewhere, and we're > just looking for the least ugly compromise. Remember that REST is not the same as HTTP, and real-world HTTP is even further removed from purity. Mapping REST to HTTP is what causes the sort of metaphor shear that this group often has to discuss, I think. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Over in Apache Jakarta land, it looks like they are wrapping up the commons projects on the grounds that most of the code was never maintained; it's just support calls instead of community. Which leaves, well, it leaves HttpComponents, core code for anyone doing HTTP work, because what few maintainers there are, are (a) more than Sun has on the job and (b) have actually read the HTTP spec instead of basing their implementation on the notes a product manager in Mountain View made about the spec in 1997. They are discussing what to do next: http://www.mail-archive.com/httpcomponents-dev@.../msg01155.html Placing it in the WS project would give it the official home it needs, and tie it in with those SOAP stacks that use it, but does run a risk that it will get pushed towards SOAPy needs and not RESTy ones. Certainly it's a better outcome than orphanage. Users of the framework may wish to get actively involved on the http commons mailing list *now*, rather than be surprised when change happens later. -steve
On 4/11/07, Mike Dierken <dierken@...> wrote: > Thanks for the heads up! > I have a strong interest in both client and server components, but haven't > followed at all the project organization. I do use the commons http client > (just downloaded and integrated the v3.1 rc1 library this afternoon) > So it seems that there are two projects dealing with an HTTP Client - one in > 'commons' and one in 'jakarta'. The 'jakarta' one (v4) is intended to take > over for the 'commons' ( v3.1). I assume this issue affects both of those > code bases. I think so. > > Providing Java a very standards compliant as well as full featured > components for HTTP from the Open Source community is very important but I > don't know that the particular hosting project - whether it's the Apache > 'Web Service' project or something else - is that important. I'm surprised > that the amount of activity is a factor at all - support for HTTP should be > relatively stable. Maybe some work on alternate approaches (non-blocking IO > for example) being the kind of activity I would expect. Heh, take a look at how much Java 5 and 6 have adapted to support cookies and proxies out of the box - and note that the automatic proxy stuff is broken on Linux, and works well enough on Windows to stop Oracle JDBC drivers working (which tells you about JDBC over HTTP). There's work to be done there, still. > > I can chime in on the discussion over there, but without a suggested landing > spot for the code I'm not sure what value I could add. It can certainly go into Apache WS, who will be glad for it, but it would clearly benefit from more grateful users on the mailing list -steve
Some resource algebra... Suppose I have two resources X and Y where X is some arbitrary resource and Y is a resource that is a count of the number of GET requests processed by resource X. So, if I do a GET on Y, I get back a count of the number of GET requests processed by resource X. With respect to a RESTful system, this seems like a reasonable thing to do. Suppose X = Y (replace all occurrences of X with Y). We can then define Y as being a resource that is a count of the number of GET requests processed by Y. Is this usage no longer RESTful? --Chuck On 4/8/07, Elliotte Harold <elharo@...> wrote: > Paul James wrote: > > My 2 cents, > > > > The resource you are GETting at /numbers/FOO/next can be described as > > "the next number in the sequence FOO". Thus the GET is idempotent as it > > always refers to the next number in the sequence, and the design you > > mentioned is fine. > > > > But the notion of "next" implies state. I'm not sure whether you're > proposing this state exists on the client or the server. If the latter, > though, I think this would change the result for all clients, and that > would be bad. It should be POST. > > Is this a random sequence or a predetermined sequence? If random, I > could see GET; but if it's a predetermined sequence I think POST is > required. > > -- > Elliotte Rusty Harold elharo@... > Java I/O 2nd Edition Just Published! > http://www.cafeaulait.org/books/javaio2/ > http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/ > > > > Yahoo! Groups Links > > > >
"Chuck Hinson" <chuck.hinson@...> writes: > Some resource algebra... > > Suppose I have two resources X and Y where X is some arbitrary > resource and Y is a resource that is a count of the number of GET > requests processed by resource X. > > So, if I do a GET on Y, a get back a count of the number of GET > requests processed by resource X. > > With respect to a RESTful system, this seems like a reasonable thing to do. > > Suppose X = Y (replace all occurrences of X with Y). We can then > define Y as being a resource that is a count of the number of GET > requests processed by Y. > > Is this usage no longer RESTful? It's certainly not very cacheable. -- Nic Ferrier http://www.tapsellferrier.co.uk
Chuck Hinson wrote: > Some resource algebra... > > Suppose I have two resources X and Y where X is some arbitrary > resource and Y is a resource that is a count of the number of GET > requests processed by resource X. > > So, if I do a GET on Y, a get back a count of the number of GET > requests processed by resource X. > > With respect to a RESTful system, this seems like a reasonable thing to do. > > Suppose X = Y (replace all occurrences of X with Y). We can then > define Y as being a resource that is a count of the number of GET > requests processed by Y. > > Is this usage no longer RESTful? As long as you don't depend on Y returning any particular value, sure it's RESTful, just like when you were using two separate resources above. The value returned by Y can only be used as a (very rough) approximation of how many GETs were made. But frankly, you're better off treating GETs like pure functions. K. -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845
Are usages RESTful, or are applications RESTful, as a shorthand way of saying that they adhere to the constraints of the style? If the latter, which constraint or constraints might not be followed in your example? Walden ----- Original Message ----- From: Keith Gaughan To: rest-discuss@yahoogroups.com Sent: Thursday, April 12, 2007 3:11 AM Subject: Re: [rest-discuss] Interface for non-idempotent fetch? Chuck Hinson wrote: > Some resource algebra... > > Suppose I have two resources X and Y where X is some arbitrary > resource and Y is a resource that is a count of the number of GET > requests processed by resource X. > > So, if I do a GET on Y, a get back a count of the number of GET > requests processed by resource X. > > With respect to a RESTful system, this seems like a reasonable thing to do. > > Suppose X = Y (replace all occurrences of X with Y). We can then > define Y as being a resource that is a count of the number of GET > requests processed by Y. > > Is this usage no longer RESTful? As long as you don't depend on Y returning any particular value, sure it's RESTful, just like when you were using two separate resources above. The value returned by Y can only be used as a (very rough) approximation of how many GETs were made. But frankly, you're better off treating GETs like pure functions. K. -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845 __________ NOD32 2182 (20070411) Information __________ This message was checked by NOD32 antivirus system. http://www.eset.com
"Chuck Hinson" <chuck.hinson@...> writes: > What I've read earlier in this thread leads me to believe that some people > believe that the second construction in the example would not be RESTful. > By my (likely inappropriate) algebra, it seems to me that the second > construction would be RESTful, but I really don't know and was hoping > someone could explain. It's not black and white like that. One could say that such an app was "not very RESTful" because it had low cacheability for no particular gain (internal, single machine performance?) But one resource doesn't really REST make. So to talk about the relative RESTfulness of it seems pretty odd to me anyway. Plus, it's not black and white like that /8-> -- Nic Ferrier http://www.tapsellferrier.co.uk
Hiya, On 12 Apr 2007 18:29:25 -0700, Chuck Hinson <chuck.hinson@...> wrote: > It's not clear to me whether or not either of the cases in my example violate the > constraints of the style. Is Sergio Leone's "For a fistful of dollars" a western movie? It's a style, not a set of laws, so you're free to interpret within the bounds that are not made explicit within the REST definition, and I believe this is such a case. There's nothing wrong with having something happen (in the background) when a GET is invoked; idempotence breaks if it changes the state of the resource itself, not if it changes the state of something else. Idempotence is important within the context of the transaction, but probably doesn't matter so much outside that context. Umm, IMHO, of course. Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
REST doesn't require representations to be cacheable; it just requires that the server indicate which data can be cached and which should not. So I don't think there is a basis for even saying "not very RESTful". Walden ----- Original Message ----- From: Nic James Ferrier To: Chuck Hinson Cc: rest-discuss@yahoogroups.com Sent: Thursday, April 12, 2007 8:40 PM Subject: Re: [rest-discuss] Interface for non-idempotent fetch? "Chuck Hinson" <chuck.hinson@...> writes: > What I've read earlier in this thread leads me to believe that some people > believe that the second construction in the example would not be RESTful. > By my (likely inappropriate) algebra, it seems to me that the second > construction would be RESTful, but I really don't know and was hoping > someone could explain. It's not black and white like that. One could say that such an app was "not very RESTful" because it had low cacheability for no particular gain (internal, single machine performance?) But one resource doesn't really REST make. So to talk about the relative RESTfulness of it seems pretty odd to me anyway. Plus, it's not black and white like that /8-> -- Nic Ferrier http://www.tapsellferrier.co.uk
REST is not about idempotence at all, so the question of whether GETs in this scenario have intended side effects or not doesn't help to understand if the design is RESTful. Walden ----- Original Message ----- From: Alexander Johannesen Cc: rest-discuss@yahoogroups.com Sent: Thursday, April 12, 2007 9:20 PM Subject: Re: [rest-discuss] Interface for non-idempotent fetch? Hiya, On 12 Apr 2007 18:29:25 -0700, Chuck Hinson <chuck.hinson@...> wrote: > It's not clear to me whether or not either of the cases in my example violate the > constraints of the style. Is Sergio Leone's "For a fistful of dollars" a western movie? It's a style, not a set of laws, so you're free to interpret within the bounds that are not made explicit within the REST definition, and I believe this is such a case. There's nothing wrong with having something happen (in the background) when a GET is invoked; idempotence breaks if it changes the state of the resource itself, not if it changes the state of something else. Idempotence is important within the context of the transaction, but probably doesn't matter so much outside that context. Umm, IMHO, of course. Alex -- ---------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
On Apr 13, 2007, at 4:20 AM, Alexander Johannesen wrote: > Is Sergio Leone's "For a fistful of dollars" a western movie? That's a great analogy! Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Chuck Hinson wrote: > On 4/12/07, Keith Gaughan <keith@...> wrote: >> Chuck Hinson wrote: >> >> > Some resource algebra... >> > >> > Suppose I have two resources X and Y where X is some arbitrary >> > resource and Y is a resource that is a count of the number of GET >> > requests processed by resource X. >> > >> > So, if I do a GET on Y, I get back a count of the number of GET >> > requests processed by resource X. >> > >> > With respect to a RESTful system, this seems like a reasonable thing >> to do. >> > >> > Suppose X = Y (replace all occurrences of X with Y). We can then >> > define Y as being a resource that is a count of the number of GET >> > requests processed by Y. >> > >> > Is this usage no longer RESTful? >> >> As long as you don't depend on Y returning any particular value, sure >> it's >> RESTful, just like when you were using two separate resources above. The >> value returned by Y can only be used as a (very rough) approximation >> of how >> many GETs were made. >> >> But frankly, you're better off treating GETs like pure functions. >> > > What does that mean and why are you better off that way? My original answer was a bit smart-arsed, so here's what I really meant: Neither your original example nor your second example was particularly RESTful because, in the context of HTTP, one of the constraints put upon GET is that it is idempotent and therefore a GET has _no significant_ side effects on the resource. That's not to say that performing a GET on a resource _can't_ have side-effects, but those side-effects have to be such that if they went away the application's functionality would not be changed. That's one of the primary REST constraints in HTTP. Of course, there are big honking counterexamples on the web, the main one being hit counters.
However, the way that hit counters use GET in a way that has significant side-effects is a hack around the fact that you can't specify that you want to fetch an image resource with POST, which would be much more appropriate. In your example, both instances are RESTful if the incrementing is, say, a second-order side-effect such as profiling, i.e., one which, if removed, does not affect the functionality of the application. However, if in both cases the side-effect is significant and affects the functionality of the application, it ain't RESTful. K. -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845
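The distinction Keith draws between significant and insignificant side effects can be sketched in a few lines (a hypothetical Python illustration, not from the thread; all names are invented). The handler below counts requests, but the representation it returns never depends on the count, so deleting the counter would change nothing a client can observe:

```python
import threading

class Resource:
    """Toy resource whose GET has only an insignificant side effect."""

    def __init__(self, representation):
        self._representation = representation
        self._hits = 0                  # profiling data, not application state
        self._lock = threading.Lock()

    def get(self):
        # Side effect: count the request. Deleting these two lines would
        # not change anything a client observes, so GET stays "safe".
        with self._lock:
            self._hits += 1
        return self._representation     # independent of the hit count

resource = Resource("<doc>hello</doc>")
assert resource.get() == resource.get()  # same representation every time
```

If the count itself became part of the returned representation (Chuck's X = Y case), the side effect would become significant in Keith's sense.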
On 4/11/07, Chuck Hinson <chuck.hinson@...> wrote: > Some resource algebra... > > Suppose I have two resources X and Y where X is some arbitrary > resource and Y is a resource that is a count of the number of GET > requests processed by resource X. > > So, if I do a GET on Y, I get back a count of the number of GET > requests processed by resource X. > > With respect to a RESTful system, this seems like a reasonable thing to do. > > Suppose X = Y (replace all occurrences of X with Y). We can then > define Y as being a resource that is a count of the number of GET > requests processed by Y. > > Is this usage no longer RESTful? No, it's RESTful. It's no different than giving your Web server log file a URI. GET is safe because it's defined to be safe. The server can do whatever it wants in response to receiving a GET message, but the important thing is that both parties (and intermediaries) understand that the client isn't *asking* for unsafe stuff to happen and so can't be held accountable. Mark.
For those of us unfamiliar with the movie or the genre, why is this a great analogy? What does it illustrate? --Chuck On 4/13/07, Stefan Tilkov <stefan.tilkov@...> wrote: > On Apr 13, 2007, at 4:20 AM, Alexander Johannesen wrote: > > > Is Sergio Leone's "For a fistful of dollars" a western movie? > > That's a great analogy! > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > > Yahoo! Groups Links > > > >
On 13-Apr-07, at 9:10 AM, Chuck Hinson wrote: > For those of us unfamiliar with the movie or the genre, why is this a > great analogy? What does it illustrate? That things are not always black and white.* It's a perfect analogy. --Toby * Sometimes they're Technicolor (bada-bing) > > --Chuck > > On 4/13/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > On Apr 13, 2007, at 4:20 AM, Alexander Johannesen wrote: > > > > > Is Sergio Leone's "For a fistful of dollars" a western movie? > > > > That's a great analogy! > > > > Stefan > > -- > > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > > > > > > Yahoo! Groups Links > > > > > > > > > >
Toby Thain <toby@...> writes: > On 13-Apr-07, at 9:10 AM, Chuck Hinson wrote: > >> For those of us unfamiliar with the movie or the genre, why is this a >> great analogy? What does it illustrate? > > That things are not always black and white.* <sings>only shades of gray...</sings> (the monkees). -- Nic Ferrier http://www.tapsellferrier.co.uk
Chuck Hinson wrote: > For those of us unfamiliar with the movie or the genre, why is this a > great analogy? What does it illustrate? Sergio Leone's films were set in the same period of American History as other Westerns, but broke with so many of the expectations of that genre as to not fit it. They are normally considered Westerns, but do not follow the rules of Westerns.
On 13 Apr 2007 07:08:32 -0700, Jon Hanna <jon@...> wrote: > > Chuck Hinson wrote: > > For those of us unfamiliar with the movie or the genre, why is this a > > great analogy? What does it illustrate? > > Sergio Leone's films were set in the same period of American History as > other Westerns, but broke with so many of the expectations of that genre > as to not fit it. > > They are normally considered Westerns, but do not follow the rules of > Westerns. Strictly speaking, they are regarded as 'spaghetti westerns' - a subgenre, see: http://en.wikipedia.org/wiki/Spaghetti_Western http://en.wikipedia.org/wiki/Sergio_Leone Hooking back to REST, I think that this is analogous to the hi-REST / lo-REST subdivision. Alan Dean http://thoughtpad.net/who/alan-dean/
"Alan Dean" <alan.dean@...> writes: > Hooking back to REST, I think that this is analogous to the hi-REST / > lo-REST subdivision. I acknowledge no such hierarchy. I've never heard Roy endorse that view either. REST is an architectural style. There might be a list of tick boxes so you can say something has relative RESTfulness... but I don't believe in pure or hi or lesser or lo. -- Nic Ferrier http://www.tapsellferrier.co.uk
Keith Gaughan wrote: > That's not to say that performing a GET on a resource _can't_ have > side-effects, but those side-effects have to be such that if they went away > the application's functionality would not be changed. That's one of the > primary REST constraints in HTTP. Not quite; that treads close to implementation dependence. GET safety is about accountability. You can't stop server owners doing stupid things in their implementations, but you can assume it won't be your fault when they do. cheers Bill
* Nic James Ferrier <nferrier@...> [2007-04-12 05:45]: > "Chuck Hinson" <chuck.hinson@...> writes: > > Suppose X = Y (replace all occurrences of X with Y). We can > > then define Y as being a resource that is a count of the > > number of GET requests processed by Y. > > > > Is this usage no longer RESTful? > > It's certainly not very cacheable. This seems like a useless point to me. Am I missing something? I cannot conceive of any arrangement of resources in which the GET count would benefit from caching, regardless of whether X == Y or X =/= Y. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
"A. Pagaltzis" <pagaltzis@...> writes: > This seems like a useless point to me. Am I missing something? > I cannot conceive of any arrangement of resources in which the > GET count would benefit from caching, regardless of whether > X == Y or X =/= Y. I think Chuck was just trying to push to the edge the reasoning about what is "RESTful" with a resource modifying itself. The example he comes up with of course is a really common one. It's a sequence basically as used for web counting. Personally, I don't think it's very RESTful, but I don't really have a problem with it as a web counter either. REST, to me, is about scalability and I think I could scale a web counter built like this without having to pay a fortune in hardware (compared to hit rate that is). -- Nic Ferrier http://www.tapsellferrier.co.uk
Nic James Ferrier wrote: > > > "Alan Dean" <alan.dean@... <mailto:alan.dean%40gmail.com>> writes: > > > Hooking back to REST, I think that this is analogous to the hi-REST / > > lo-REST subdivision. > > I acknowledge no such hierarchy. I've never heard Roy endorse that view > either. > > REST is an architectural style. If you follow where the notion of a "style" comes from (the arts), then hi and lo analogy would refer to periods of use, not levels of use. Levels are what Don was referring to. For example, when the industry gets their hands on REST, that will be the beginning of the end of hi rest and the start of the beginning of mannerist REST. Here's another problem with the notion of software "style". Architectural styles go in and out of fashion, and aren't necessarily appropriate (for example the greeks building temples in a style better suited to wood than stone). None of this stuff from the physical world carries over very well, I'm afraid. cheers Bill
On Apr 15, 2007, at 4:45 AM, Bill de hOra wrote:
> > REST is an architectural style.
>
> If you follow where the notion of a "style" comes from (the arts),
> then
> hi and lo analogy would refer to periods of use, not levels of use.
> Levels are what Don was referring to.
>
> For example, when the industry gets their hands on REST, that will be
> the beginning of the end of hi rest and the start of the beginning of
> mannerist REST.
>
> Here's another problem with the notion of software "style".
> Architectural styles go in and out of fashion, and aren't necessarily
> appropriate (for example the greeks building temples in a style better
> suited to wood than stone). None of this stuff from the physical world
> carries over very well, I'm afraid.
There are two types of Architecture in the real world of buildings:
1) the artistic shells that a bunch of critics talk about as yet
another way of demonstrating their linguistic flexibility
without actually having a clue about buildings;
2) the design for construction of a building in order to fit its
long-term purpose while occasionally transcending it in other
dimensions.
Amongst actual successful architects (not the critics who write about
them), styles are an important organizing method based (usually) on
the materials available in a given location. Those styles are all
over the place. Most people, however, only know about the monumental
styles (the outlandish ones used to construct churches, temples, and
civic structures that are meant to be more visually imposing than
truly functional).
Only one in a million architects reach the stage of a Gehry, who
can devote his focus entirely on his artistic style (while his
crowd of apprentices try to make the building work).
REST, as styles go, is more like the architecture of barnyards and
office suites. The notion that there is anything lo or hi about it
demonstrates a fundamental lack of understanding -- it's like saying
"That is a good barn, and I soooo like how the 3ft-wide main door
is slimming and artistic."
There is no gray area when a constraint is violated, even if the
pain is not immediately apparent. The purpose of the architectural
style is to let us know what fits, and what doesn't, before we try
to lead a horse through the door.
....Roy
Roy T. Fielding wrote: > There are two types of Architecture in the real world of buildings: > > 1) the artistic shells that a bunch of critics talk about as yet > another way of demonstrating their linguistic flexibility > without actually having a clue about buildings; Wolfe. > 2) the design for construction of a building in order to fit its > long-term purpose while occasionally transcending it in other > dimensions. Vitruvius. > Amongst actual successful architects (not the critics who write about > them), styles are an important organizing method based (usually) on > the materials available in a given location. Those styles are all > over the place. Most people, however, only know about the monumental > styles (the outlandish ones used to construct churches, temples, and > civic structures that are meant to be more visually imposing than > truly functional). I still think the carry over is troublesome. Aside from hokey analogies, the role of architect is as much an end run around the social jostling amongst sub-disciplines for pre-eminence in software systems; and it leads to damaging notions such as "implementation detail" implying "trivial" instead of, well, implementation detail. cheers Bill
NOELIOS CONSULTING today announced the final 1.0 version of its Noelios Restlet Engine (NRE), the reference implementation of the Restlet API 1.0. The Restlet open source project was launched at the end of 2005 and was the first REST framework for Java. Since its launch, it has attracted an active and quickly growing community of users. With more than sixty different contributors and two core developers, the project went through an intense and fruitful collaborative design. Several applications are already deployed in production within organizations of various sizes, including Overstock.com, an Internet leader for brand names at clearance prices. The Restlet project is also used as a support technology for various software architecture classes covering the REST architecture style, for example at the University of California, Irvine, or at the INSA Rouen engineering school. NOELIOS, as the founder and leader of the project, is now offering complete professional support, including yearly subscription plans and a per-incident plan, with prices ranging from 350 € to 2850 €. It also offers expert consulting services on Restlet and connected technologies such as Java, XML and REST. Changes log: http://www.restlet.org/documentation/1.0/changes Download links: http://www.restlet.org/downloads/1.0/restlet-1.0.0.exe http://www.restlet.org/downloads/1.0/restlet-1.0.0.zip Best regards, Jerome Louvel http://www.noelios.com
Bill de hOra wrote: > Keith Gaughan wrote: > >> That's not to say that performing a GET on a resource _can't_ have >> side-effects, but those side-effects have to be such that if they went away >> the application's functionality would not be changed. That's one of the >> primary REST constraints in HTTP. > > Not quite; that treads close to implementation dependence. GET safety is > about accountability. You can't stop server owners doing stupid things > in their implementations, but you can assume it won't be your fault when > they do. In my own awkward way, that's part of what I was trying to say. The kind of side-effects I was thinking of as OK would be things like logging requests: it's a side effect, but it's not externally visible and if the server stopped doing it, the client would be none the wiser. K. -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845
Keith Gaughan wrote: > Bill de hOra wrote: > >> Keith Gaughan wrote: >> >>> That's not to say that performing a GET on a resource _can't_ have >>> side-effects, but those side-effects have to be such that if they went away >>> the application's functionality would not be changed. That's one of the >>> primary REST constraints in HTTP. >> Not quite; that treads close to implementation dependence. GET safety is >> about accountability. You can't stop server owners doing stupid things >> in their implementations, but you can assume it won't be your fault when >> they do. > > In my own awkward way, that's part of what I was trying to say. The kind of > side-effects I was thinking of as OK would be things like logging requests: > it's a side effect, but it's not externally visible and if the server > stopped doing it, the client would be none the wiser. Agreed. I think another possibility is where something happens in reaction to the GET, but the client isn't necessarily aware that this is how it is happening - e.g. the ID generation you discussed earlier could be performed by incrementing a counter on GET but could also be performed by some sort of UUID algorithm. As long as the semantics of the resource are "ID generator that guarantees uniqueness - representation gives current state" rather than "Next in series" then the fact that an increment is happening on GET is just an implementation artefact that doesn't concern the client.
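Jon's point that the increment is just an implementation artefact can be made concrete with a sketch (hypothetical names, Python standard library only; not from the thread): two interchangeable implementations of "an ID generator that guarantees uniqueness", only one of which mutates any state on GET.

```python
import itertools
import uuid

class UuidGenerator:
    """GET returns a fresh unique ID; nothing mutates on the server."""
    def get(self):
        return f"id-{uuid.uuid4()}"

class CounterGenerator:
    """GET returns a fresh unique ID by incrementing a counter; the
    increment is invisible to clients, who only see 'a new unique ID'."""
    def __init__(self):
        self._counter = itertools.count(1)
    def get(self):
        return f"id-{next(self._counter)}"

# Either implementation satisfies the resource's contract:
# every GET yields an ID not seen before.
for gen in (UuidGenerator(), CounterGenerator()):
    ids = {gen.get() for _ in range(100)}
    assert len(ids) == 100
```

The client can't tell which implementation is behind the URI, which is exactly why the counter's side effect doesn't concern it.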
While designing a REST API, what's a good approach to account for versioning? - versioned URLs - a version query parameter - others?
On 18 Apr 2007, at 15:26, Keyur Shah wrote: > While designing a REST API what's a good approach to account for > versioning? > - versioned URLs A versioned base URL seems to be popular, like: http://api.example.com/v1/users/fish http://api.example.com/v1/tags > - a version query parameter Ugh! > - others? Many issues probably don't have to be shown in such a way, at least if you have one single access point (where the client doesn't need to find out what version you run). Adding for instance http://api.example.com/v1/users/fish/tags to the above mentioned 'API' wouldn't break any existing clients, although it could be argued to be a new version. For data formats, using XML should make it possible to add elements and support more types of elements without breaking old clients. (Note though that clients might be doing lots of silly things such as just iterating over your elements without checking the tag name at all) -- Stian Soiland, myGrid team School of Computer Science The University of Manchester http://www.cs.man.ac.uk/~ssoiland/
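The versioned-base-URL style can be sketched as a tiny dispatcher (a hypothetical Python illustration; the handlers and paths are invented):

```python
def route(path):
    """Dispatch a versioned path like /v1/users/fish to a per-version handler."""
    handlers = {
        "v1": lambda rest: f"v1 handler for /{rest}",
        "v2": lambda rest: f"v2 handler for /{rest}",
    }
    parts = path.lstrip("/").split("/", 1)   # "/v1/users/fish" -> ["v1", "users/fish"]
    if len(parts) == 2 and parts[0] in handlers:
        return 200, handlers[parts[0]](parts[1])
    return 404, "unknown API version or path"

assert route("/v1/users/fish") == (200, "v1 handler for /users/fish")
assert route("/v9/users/fish")[0] == 404
```

Adding /v1/users/fish/tags later requires no change here, which is Stian's point about growing an API without breaking existing clients.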
Internal web services here use versioned URIs, like /v1/foo. Another approach is to change the identifier when the underlying resource changes, but without naming it by version. So v1 is /foo and v2 is /bar. Another approach is to use content negotiation, so v1 does "Accept: foo" and v2 does "Accept: bar". -Lucas On 4/18/07, Keyur Shah <keyurva@...> wrote: > While designing a REST API what's a good approach to account for > versioning? > - versioned URLs > - a version query parameter > - others? > > > > > Yahoo! Groups Links > > > >
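The content-negotiation approach Lucas mentions might look like this sketch (the media type names are invented for illustration; a real server would also weigh q-values per RFC 2616 section 14.1, which this ignores):

```python
def negotiate(accept_header):
    """Choose a representation version from the Accept header."""
    representations = {
        "application/vnd.example.foo+xml": "<foo/>",   # the "v1" format
        "application/vnd.example.bar+xml": "<bar/>",   # the "v2" format
    }
    # Strip any parameters (e.g. ";q=0.8") and try each offered type in order.
    for item in accept_header.split(","):
        media_type = item.split(";")[0].strip()
        if media_type in representations:
            return 200, representations[media_type]
    return 406, "Not Acceptable"

assert negotiate("application/vnd.example.bar+xml") == (200, "<bar/>")
assert negotiate("text/html") == (406, "Not Acceptable")
```

Old clients keep asking for the old media type and are never broken by the new one.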
On 4/18/07, Keyur Shah <keyurva@...> wrote: > > While designing a REST API what's a good approach to account for > versioning? > - versioned URLs > - a version query parameter > - others? Could you clarify what you mean by versioning? Are you talking about changes to the message format, or changes required to the URI (unrelated to content-type)? Alan
On 19 Apr 2007, at 09:29, Alan Dean wrote: > Are you talking about changes to the message format, or changes > required to the URI (unrelated to content-type)? Interesting, versioning of the resource... Could one use DeltaV (RFC 3253) for that purpose, or is it too complex to twiddle down the WebDAV way? It seems DeltaV adds versions as a separate resource; that could be a good inspiration. Say using a wiki as an example, it would be interesting to do at least: Which versions of resource R exist? When were they made, by whom? Give me version V of R. Additional versioning meta-data could be a commit message, or view a 'patch' or 'diff' - what was changed. -- Stian Soiland, myGrid team School of Computer Science The University of Manchester http://www.cs.man.ac.uk/~ssoiland/
I understand that it's always important to model everything as a resource in a REST system and perform CRUD operations on them (so that GET, POST, PUT and DELETE can fit the bill). But there might be cases where you have to model non-CRUD operations in your REST system - and not all reasons are technical. Your customers are used to a certain vocabulary which contains well-known and well-accepted verbs, and you simply can't change them to fit the "resource" paradigm. So given this constraint that you absolutely must have non-standard verbs, what's the best way to model them? I have a blog entry on this and would appreciate your suggestions. http://abstractfinal.blogspot.com/2007/04/restful-urls-for-non-crud-operations.html Thanks, Keyur
Keyur Shah wrote: > So given this constraint that you absolutely must have non-standard > verbs, what's the best way to model them? > > I have a blog entry on this and would appreciate your suggestions. > > http://abstractfinal.blogspot.com/2007/04/restful-urls-for-non-crud-operations.html Eh, your 3 different approaches are all the same? You're still GETting representations of resources, but using different URIs for the resources. POST allows you to do CRUDE rather than CRUD. Do that.
On 4/19/07, Keyur Shah <keyurva@...> wrote: > ~ Could you clarify what you mean by versioning? > > I meant changes to the message format (content-type) Here are some strategies that I can think of: 1) Stipulation In this strategy, the message stipulates the version e.g.: <?xml version="1.0"?> <message version="2.0" /> Of course, this strategy isn't only for xml - it's just easier to show in xml. Alternatively, for xml, the version can be represented by a namespace. This is the approach used by sitemaps, see http://www.sitemaps.org/protocol.html e.g.: <?xml version="1.0"?> <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">...</urlset> 2) Backwards compatibility In this strategy, versioning is only additive - that is to say each subsequent version only extends, never replaces, and any extension is optional. This way, no version stipulation is required. This is, approximately, what html has done (not the xhtml dialect). 3) Standalone In this strategy, each 'version' is defined in isolation to any other. In effect, each version requires a separate MIME type. This is, de facto, what RSS did when it transitioned from RDF Site Summary (application/rdf+xml) to Really Simple Syndication (application/xml). Regards, Alan Dean
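The namespace-as-version variant of the "stipulation" strategy can be checked like this (using the real sitemaps namespace; reading it off the expanded tag is just one way to do it):

```python
# Sketch of namespace-as-version detection with ElementTree.
# The document is a trimmed sitemaps example.
import xml.etree.ElementTree as ET

doc = '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"></urlset>'
root = ET.fromstring(doc)
# ElementTree expands tags to "{namespace-uri}localname",
# so the namespace can be sliced out of the root tag.
namespace = root.tag[1:root.tag.index("}")]
```

A consumer can then branch on the namespace URI to pick the right parsing rules for that format version.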
On 4/19/07, Keyur Shah <keyurva@...> wrote: > > While designing a REST API what's a good approach to account for > versioning? > - versioned URLs > - a version query parameter > - others? Stefan Tilkov dealt somewhat with the CRUD issue in his blog comment, so I'll come at this from a different angle: What vocabulary do you really have in mind? You mentioned "walk" and "talk" in your blog entry, but I got the impression that those were made-up examples. Any discussion might be more useful if we considered the real problem.
Bob Haugen wrote: > > > On 4/19/07, Keyur Shah <keyurva@... <mailto:keyurva%40yahoo.com>> > wrote: > > I understand that it's always important to model everything as a > > resource in a REST system and perform CRUD operations on them (so that > > GET, POST, PUT and DELETE can fit the bill). But there might be cases > > where you have to model non-CRUD operations in your REST system - and > > not all reasons are technical. You customers are used to a certain > > vocabulary which contains well-known and well-accepted verbs and you > > simply can't change them to fit the "resource" paradigm. > > > > So given this constraint that you absolutely must have non-standard > > verbs, what's the best way to model them? > > Stefan Tilkov dealt somewhat with the CRUD issue in his blog comment, > so I'll come at this from a different angle: > > What vocabulary do you really have in mind? You mentioned "walk" and > "talk" in your blog entry, but I got the impression that those were > made-up examples. Any discussion might be more useful if we > considered the real problem. Some examples: - any Plone URLs ending in /edit or /view - Atom protocol "edit" URIs. - MoinMoin ?action= URLs - Zimbra export URLs (URL per format) - citizensinformation.ie /entry.xml URLs (Atom URLs) Fwiw, I don't buy this: "You customers are used to a certain vocabulary which contains well-known and well-accepted verbs" it's usually the framework designers that are making you do this, not customers (eg most Java frameworks are action/controller based). Take the frameworks away, and then it's DSL and Domain Model aficionados designing things in a certain way. Not customers. cheers Bill
Bill de hOra wrote: > > > Bob Haugen wrote: > > > > > > On 4/19/07, Keyur Shah <keyurva@... > <mailto:keyurva%40yahoo.com> <mailto:keyurva%40yahoo.com>> > > wrote: > > > I understand that it's always important to model everything as a > > > resource in a REST system and perform CRUD operations on them (so that > > > GET, POST, PUT and DELETE can fit the bill). But there might be cases > > > where you have to model non-CRUD operations in your REST system How are non-CRUD operations modeled in an RDBMS system? cheers Bill
Bob Haugen <bob.haugen@...> wrote: On 4/19/07, Keyur Shah wrote:
> So say I am google maps... I need to design a rest system that can
> perform 2 operations - find route and query an address... The REST way
> would probably be:
>
> http://google.com/maps/directions?from=foo&to=bar
> http://google.com/maps/location?address=blah
>
> However, if google maps had legacy APIs / SOAP web services such that
> the verbs findRoute and queryAddress were firmly instilled in the
> verbiage of their user community, it might be a difficult proposition
> for them to suddenly introduce a new vocabulary for the same set of
> operations to their users. The user community sees them as operations
> and not in terms of the resulting resources (directions and location).
> Legacy wins over technical correctness.
If whatever.com/maps created another set of resources
where /directions was replaced by /findRoute
and /location was replaced by /queryAddress
and everything else worked the same,
it's not a technical difference, just a naming difference.
Still just GETs with query args, but now with unstylish URIs.
Bill de hOra wrote: > > > > On 4/19/07, Keyur Shah <keyurva@... > <mailto:keyurva%40yahoo.com> <mailto:keyurva%40yahoo.com>> > Fwiw, I don't buy this: > > "You customers are used to a certain vocabulary which contains > well-known and well-accepted verbs" > > it's usually the framework designers that are making you do this not > customers (eg most Java frameworks are action/controller based). Take > the frameworks away, and then it's DSL and Domain Model aficionados > designing things in a certain way. Not customers. I should say then that I buy the idea that URLs with "domain specific" verbs in them are rampant, and can't be conveniently ignored. cheers Bill
Hi, I've been interested in the REST architectural style for some time, so I hope not to say too many wrong things. (I'm talking about your blog post.) I think your 3 propositions are actually the same: they are 3 ways to put the action in the URI. That is the mistake. There may be a different solution to your problem. You say that in the REST architectural style "everything is a resource"; you're right, so we mustn't, I think, make a 1-to-1 relationship between our object model (for example :)) and the REST data model. I think the mapping may be complex, and I haven't yet found an interesting example of mapping between OO models and REST models. But maybe you have to check whether there exist other resources in your source model which are not easy to translate to a REST resource. A second solution is to use only the Person resource. If we suppose we have to deal with a Person resource and only that, and we want this person to walk 1 unit towards the north, we have to assume that the Person resource knows its position. The client has to know the current position of the Person, and to move the Person I think the client should POST to the server the intended state of the resource (the new position). Note: I didn't answer your question. I just refuse the axiom "So given this constraint that you absolutely must have non-standard verbs". Hope it helps. -- benoit Keyur Shah wrote: > > I understand that it's always important to model everything as a > resource in a REST system and perform CRUD operations on them (so that > GET, POST, PUT and DELETE can fit the bill). But there might be cases > where you have to model non-CRUD operations in your REST system - and > not all reasons are technical. Your customers are used to a certain > vocabulary which contains well-known and well-accepted verbs and you > simply can't change them to fit the "resource" paradigm. > > So given this constraint that you absolutely must have non-standard > verbs, what's the best way to model them?
> > I have a blog entry on this and would appreciate your suggestions. > > http://abstractfinal.blogspot.com/2007/04/restful-urls-for-non-crud-operations.html > <http://abstractfinal.blogspot.com/2007/04/restful-urls-for-non-crud-operations.html> > > Thanks, > Keyur > >
Keyur, What you need to do is think of your "custom" verbs in terms of state transfer. When person/1 walks, what changes state? What is the pre-state and what is the post-state? Can you define the state change as an idempotent operation (move person/1 to coordinates (x,y))? Or is it non-idempotent (move person n units north, e units east)? Use your standard verbs against the resources which suffer the change. Make sure you know what those resources are and that they are identified. Voila. Walden ----- Original Message ----- From: Keyur Shah To: rest-discuss@yahoogroups.com Sent: Thursday, April 19, 2007 12:24 PM Subject: [rest-discuss] RESTful URLs for non-CRUD operations I understand that it's always important to model everything as a resource in a REST system and perform CRUD operations on them (so that GET, POST, PUT and DELETE can fit the bill). But there might be cases where you have to model non-CRUD operations in your REST system - and not all reasons are technical. Your customers are used to a certain vocabulary which contains well-known and well-accepted verbs and you simply can't change them to fit the "resource" paradigm. So given this constraint that you absolutely must have non-standard verbs, what's the best way to model them? I have a blog entry on this and would appreciate your suggestions. http://abstractfinal.blogspot.com/2007/04/restful-urls-for-non-crud-operations.html Thanks, Keyur
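Walden's idempotent/non-idempotent distinction for "walk" can be sketched as two hypothetical server-side handlers (the function names are invented; the Person is just a dict holding coordinates):

```python
# Two ways to express the same "walk" action, per Walden's distinction.
def put_position(person, x, y):
    # Idempotent, PUT-like: "move person/1 to coordinates (x, y)".
    # Repeating the request yields the same end state.
    person["x"], person["y"] = x, y
    return person

def post_move(person, dx, dy):
    # Non-idempotent, POST-like: "move person n units north, e units east".
    # Repeating the request moves the person again.
    person["x"] += dx
    person["y"] += dy
    return person
```

The idempotent form is the safer one to retry after a network failure, which is one reason to prefer transferring the intended end state over transferring a delta.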
Hello, I am trying to figure out the real meaning of client-stateless-server as it applies to REST. I have looked at the thesis, and went through this newsgroup, but am not completely satisfied with what client-stateless-server really means. Representational State Transfer should have all the details in a request so that a server can handle the request (thus any server in a farm that has the service can process a request that started at another server--scalable). Wouldn't this mean that any form of database that keeps state on the server and not the client would not be RESTful? As an example, consider the shopping cart on Amazon.com. Amazon.com stores the state of the shopping cart on the server so that when a customer logs in, they can see the items they have saved. How is this RESTful? State is being stored on the server, and any further request to the server depends on the fact that prior requests set up the shopping cart to add items, etc. To be RESTful, I would think that each request would carry the list of items in the shopping cart. So, look at a Session in a web application. I see this as a database. It stores information on the server, although the session is meant to be only temporary--it may very well be persisted to a file store for clustering. This of course has been argued before as not being RESTful. How would this be different from having a database storing all the information its clients have changed? That database has to be shared among all servers, which is not scalable. Can someone please explain this to me? -David
"siefert.david" <siefert.david@...> writes: > As an example, it could be a shopping cart on Amazon.com. Amazon.com > stores the state of the shopping cart on the server so that when a > customer logs in, they can see the items they have saved. How is > this RESTful? Amazon's shopping cart is NOT RESTful. The state is stored in a session object that is associated with you by a cookie. It *can* be done RESTfully. If the shopping cart is a resource then you can POST (or PUT) items into it. The shopping cart can then be scaled in the same way other resources are scaled. > request to the server had already depended on the fact that the prior > requests setup the shopping cart to add items, etc. To be RESTful, I > would think that each request would carry the list of items in the > shopping cart. That would be another way of doing it. Much less efficient than just keeping a resource. There are resource-based shopping carts out there. Check the list archives and you'll find them. > So, if you look at a Session in a web application. I see this as a > database. It stores information on the server, although the session > is meant to be only temporary--it may very well be persisted to a > file store for clustering. This of course has been argued before as > not being RESTful. How would this be different from having a > database storing all the information its clients have changed? That > database has to be shared among all servers which is not scalable. Well... pushing sessions back into the database does make a non-RESTful app more scalable, because databases tend to be more scalable than app server session load balancers. But such apps do tend to have higher latency. And you have to be Amazon to scale up to Amazon request levels. If you do things by resource then you scale at the webserver, which is cheap and simple. You then need to aggregate that data (when you need to aggregate) but the collectivization tends to be less frequent than atomic writes. Right?
-- Nic Ferrier http://www.tapsellferrier.co.uk
--- In rest-discuss@yahoogroups.com, Nic James Ferrier <nferrier@...> wrote: > > "siefert.david" <siefert.david@...> writes: > > > As an example, it could be a shopping cart on Amazon.com. Amazon.com > > stores the state of the shopping cart on the server so that when a > > customer logs in, they can see the items they have saved. How is > > this RESTful? > > Amazon's shopping cart is NOT RESTful. The state is stored in a > session object that is associated with you by a cookie. Sorry, I was not talking about the interface in this case. I was talking abstractly, in that Amazon.com stores your shopping cart items in a database. It is the restoration of the database records that is not RESTful. > > It *can* be done RESTfully. If the shopping cart is a resource then > you can POST (or PUT) items into it. The shopping cart can then be > scaled in the same way other resources are scaled. > But the state (namely, your shopping cart) IS stored on the server. > > > request to the server had already depended on the fact that the prior > > requests setup the shopping cart to add items, etc. To be RESTful, I > > would think that each request would carry the list of items in the > > shopping cart. > > That would be another way of doing it. Much less efficient than just > keeping a resource. > > There are resource based shopping carts out there. Check the list > archives and you'll find them. > I searched the list and found an example. It used cookies, but more interesting was the response by Roy T. Fielding. It makes some mention of how customization on the web (personalization) is not RESTful. > > > So, if you look at a Session in a web application. I see this as a > > database. It stores information on the server, although the session > > is meant to be only temporary--it may very well be persisted to a > > file store for clustering. This of course has been argued before as > > not being RESTful.
How would this be different from having a > > database storing all the information its clients have changed? That > > database has to be shared among all servers which is not scalable. > > Well... pushing sessions back into the database does make a > non-RESTful app more scalable because databases tend to be more > scalable than app server session load balancers. > > But such apps do tend to have higher latency. And you have to be > amazon to scale up to amazon request levels. > > > If you do things by resource then you scale at the webserver which is > cheap and simple. You then need to aggregate that data (when you need > to aggregate) but the collectivization tends to be less frequent than > atomic writes. Right? > > -- > Nic Ferrier > http://www.tapsellferrier.co.uk > Thank you for the resource pointers. It has helped me gain a better understanding. As in the shopping cart example, to be truly RESTful, wouldn't the sequence go like so: 1. User browses to store site 2. User adds an item (GET http://store.com/cart/item/?id=5) 3. The server responds with another page which has each link reference the fact the user has item 5 in their shopping cart. 4. The user browses to another page (again, carrying with it the state that it has item 5 in the cart). 5. The user adds another item (http://store.com/cart/item/?id=5&id=9) 6. The server responds with another page which has each link reference the fact the user now has item 5 and item 9 in their shopping cart. Once the user closes their browser, the only way to see the items in the shopping cart would be through digging it out from the cache, or bookmarking the shopping cart page with the items. Wouldn't this be a RESTful shopping cart? There are many pitfalls of course in this example (user goes to another computer, or browser does not maintain cache, bandwidth when user has many items in shopping cart, etc).
However, it is scalable because it would not matter what server the request goes to; the request contains the state of the user's shopping cart and is therefore RESTful. Am I wrong? I appreciate whatever guidance you can give me. Thanks, David
"siefert.david" <siefert.david@...> writes: > Sorry, I was not talking about the interface in this case. I was > talking abstractly in that Amazon.com stores your shopping cart items > in a database. It is the restoration of the database records that is > not RESTful. Ermmm.... > But the state (namely, your shopping cart) IS stored on the server. But it's not stored as a resource. It's hidden. Or at least the interactions with it are. > I searched the list and found an example. It used cookies, but more > interestingly was the response by Roy T. Fielding. It makes some > mention of how customization on the web (personalization) is not > RESTful. Not so. Personalization can be done RESTfully. I think you misunderstood Roy. > Thank you for the resource pointers. It has helped me gain a better > understanding. > > As in the shopping cart example, to be truly RESTful, Whoa Nelly! Don't go down that road. There is no "truly RESTful". REST is more like the pirate code, just guidelines. Using RESTful principles can make your app more scalable more cost effectively. > 1. User browses to store site > 2. User adds an item (GET http://store.com/cart/item/?id=5) > 3. The server responds with another page which has each link > reference the fact the user has item 5 in their shopping cart. > 4. The user browses to another page (again, carrying with it the > state that it has item 5 in the cart). > 5. The user adds another item (http://store.com/cart/item/?id=5&id=9) > 6. The server responds with another page which has each link > reference the fact the user now has item 5 and item 9 in their > shopping cart. This doesn't sound right. You seem to be using GET to alter state, which is a big no-no, not just in REST but more generally. How about: 1. nic browses to store site 2. nic hits "buy me" button on book about Orcas 3. AJAX sends a POST with the book url to /nicferrier/basket 4. nic browses to another page 5. nic hits "buy me" button on sandwich toaster 6.
AJAX sends a POST with the sandwich toaster url to /nicferrier/basket When I want to look at my shopping basket I can just: GET /nicferrier/basket Well. This isn't much different from what you can do with cookies. There are benefits that REST has given you: 1. it is more scalable because now everybody's shopping basket can be moved from server to server... 1 per server if you really wanted 2. it enables shared shopping baskets - something that cookies/sessions really have trouble doing unless they are serialized to a single point (which is bad for latency reasons). > Wouldn't this be a RESTful shopping cart? There are many pitfalls of > course in this example (user goes to another computer, or browser > does not maintain cache, bandwidth when user has many items in > shopping cart, etc). However, it is scalable because it would not > matter what server the request goes to, the request contains the > state of the users shopping cart and is therefore RESTful. No... it wouldn't be particularly RESTful, I don't think. The state is stored in the URLs. This is just like a form of URL re-writing with session IDs. The trouble with that is that it makes the site difficult to cache by proxies and other intermediaries. This is a big deal because such things are a key way to cope with disconnectedness. -- Nic Ferrier http://www.tapsellferrier.co.uk
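Nic's basket-as-resource flow might look like this on the server side (an illustrative in-memory sketch; the paths are his made-up examples, and a real implementation would persist the baskets somewhere):

```python
# In-memory sketch of a basket exposed as a resource.
# Keyed by basket path, as in Nic's /nicferrier/basket example.
baskets = {}

def post_item(basket_path, item_url):
    # POST appends an item to the basket resource; repeating the POST
    # adds the item again (POST is not idempotent).
    baskets.setdefault(basket_path, []).append(item_url)

def get_basket(basket_path):
    # GET returns the current representation of the basket. Anyone who
    # knows the URL can fetch it, which is what makes shared baskets easy.
    return list(baskets.get(basket_path, []))
```

Because the basket is addressed by URL rather than tied to a session, any server holding (or able to reach) that resource can serve the request, which is the scalability point above.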
siefert.david wrote: > Representational State Transfer should have all the details in a > request so that a server can handle the request (thus any server in a > farm that has the service can process a request that started at > another server--scalable). Wouldn't this mean that any form of a > database that keeps state on the server and not the client, not be > RESTful? > > As an example, it could be a shopping cart on Amazon.com. Amazon.com > stores the state of the shopping cart on the server so that when a > customer logs in, they can see the items they have saved. How is > this RESTful? State is being stored on the server, and any further > request to the server had already depended on the fact that the prior > requests setup the shopping cart to add items, etc. There's a bit of confusion here about the notion of state. It's OK for the server to have state, and for that state to be updated as a result of requests, in fact it's quite normal. The issue in a REST design is about application state and its relation to scalability, specifically that if a server is required to store transactional state that associates multiple request/response pairs with one another in a state machine, as with many Internet protocols like IMAP or FTP, it has a negative effect on scalability and limits the number of potential clients that can be served. Therefore in a REST system (e.g. HTTP) all this transactional state is made available in each request. For non-application functionality, it's OK for the client and the server to share a secret, like the authenticated user's passphrase. But generally, the plan is to expose any data related to application functionality as a resource, identified as a URL, and describe the application as a set of operations that transfer data representing that resource to and from the server. So you wouldn't design a shopping cart using a shared secret, e.g. 
a cookie that holds an opaque identifier that identifies a record in some database accessible to the server; you'd design it as a shopping-cart resource that you can GET with its URL to see the current state of it (assuming you have the authorisation to do so), and POST to with a new item to be added to it, etc. > To be RESTful, I > would think that each request would carry the list of items in the > shopping cart. You could do that (PUT the entire state of the shopping cart), although it's not required to be RESTful. The RESTful part of it is that each shopping cart has a URL that you interact with by pushing and pulling (bits of) shopping cart representation.
On Apr 26, 2007, at 2:30 AM, Chris Burdess wrote: > There's a bit of confusion here about the notion of state. It's OK > for the server to have state, and for that state to be updated as a > result of requests, in fact it's quite normal. The issue in a REST > design is about application state and its relation to scalability, > specifically that if a server is required to store transactional > state that associates multiple request/response pairs with one > another in a state machine, as with many Internet protocols like IMAP > or FTP, it has a negative effect on scalability and limits the number > of potential clients that can be served. Therefore in a REST system > (e.g. HTTP) all this transactional state is made available in each > request. For non-application functionality, it's OK for the client > and the server to share a secret, like the authenticated user's > passphrase. But generally, the plan is to expose any data related to > application functionality as a resource, identified as a URL, and > describe the application as a set of operations that transfer data > representing that resource to and from the server. So you wouldn't > design a shopping cart using a shared secret, e.g. a cookie that > holds an opaque identifier that identifies a record in some database > accessible to the server; you'd design it as a shopping-cart resource > that you can GET with its URL to see the current state of it > (assuming you have the authorisation to do so), and POST to with a > new item to be added to it, etc. Right, but only if you want such state to persist for a user. The best way to do a shopping cart RESTfully is to use standard mark-up to describe items that can be purchased and allow the user agent to "move" items from whatever page they happen to be looking at into their own browser's virtual cart. The mark-up can describe where to go for check-out, and the cart could contain items from many different merchants. 
In other words, all of the state remains on the client. The reason we don't do it that way now is partly because shops don't believe in waiting for standard media types to be updated, and partly because Netscape became gun-shy after the response to their early HTML extensions. ....Roy
http://pluralsight.com/blogs/tewald/archive/2007/04/26/46984.aspx "It's depressing to think that SOAP started just about 10 years ago and that now that everything is said and done, we built RPC again."
"Alan Dean" <alan.dean@...> writes: > http://pluralsight.com/blogs/tewald/archive/2007/04/26/46984.aspx > > "It's depressing to think that SOAP started just about 10 years ago and > that now that everything is said and done, we built RPC again." Interesting that so many people are pushed away from REST when they consider stored procs. To me stored procs are a key comparison for REST. I write RESTful stored procs a lot. They nearly always conform to CRUD and mostly represent a single identifiable resource (a view, a lot of the time). Heh. Funny. -- Nic Ferrier http://www.tapsellferrier.co.uk
This arrived in my inbox this morning:
To: nferrier@...
Subject: Who's Afraid of SOA Implementation?
Date: Fri, 27 Apr 2007 10:03:40 +0100 (IST)
From: "BEA IT2IT Insight "<it2it@...>
--text follows this line--
============================================================
IT2IT Insight April 2007
============================================================
The program for IT professionals, by IT professionals
So you know you need a service-oriented architecture, but
where to begin? In this month's issue of IT2IT Insight, we
explore laying the groundwork for SOA.
- "BEA AquaLogic Service Bus Behind the Firewall" walks you
through integrating a service bus into your SOA.
- In "SOA Governance," you'll learn about the basics of
governance and how to delegate the myriad responsibilities
for managing your new architecture.
- And finally, we bring you the first in a series of four
podcasts featuring Ed Kourany, BEA's Executive Director of
Consulting. Over the series, Ed will explain how to sell
the vision of SOA up the chain of command to ensure your
project gets critical approvals, and funding.
I like this bit best:
Ed will explain how to sell
the vision of SOA up the chain of command to ensure your
project gets critical approvals, and funding.
OR... you could just not bother and do it with REST instead.
--
Nic Ferrier
http://www.tapsellferrier.co.uk
Hi, Please bear with my long email, as SOAP, REST and Restlet are new to me. I am working on an SOA-based web services project where different services talk to each other. My current web service encapsulates an application that is wired with Spring. In my web service usage, the intent is to pass around the query and response in the form of XML, along with other string query parameters. Currently my web service is exposed through SOAP. I recently came across a claim that one of SOAP's limitations is that the data element size in the XML document is 32K (I am not sure if it is true; I'd appreciate your input here). To avoid this restriction, I started exploring REST. It looks like REST is an architectural style and has the following main advantages over SOAP: - it does not need development tools - the URL of the resources can be internally mapped to the changed service's API - usage of HTTP GET - usage of nouns in the URI, and no need for ? in the URI. But I intend to expose my web service both through SOAP and REST. Currently my SOAP-based web service is running in a web container, and I want to add a REST version of the web service. What I am not clear on is how to implement REST, and I have the following questions: 1) does a REST-based web service involve simply exposing an XML document that has the URIs for different resources to the web service clients, and returning the queried data in XML form? 2) I do not see the need for the MVC pattern to be implemented here, as my service is not a true web application. 3) I do not see the need to use the Restlet framework, since my intent is to use the web container ONLY to support the SOAP-based web service. Also I do not want to embed the overhead of routers, resources and servlet converters. Please advise. Also, example code of a REST-based web service would be appreciated. Thanks in advance for your valuable input, time and interest.
"skkcr" <skkcr@...> writes: > 1) does a REST-based web service involve simply exposing an xml > document that has the uris for different resources to the web > service clients and return the queried data in xml form? Yes. If you did that you would have a usable, RESTful service. There is debate here about what that XML should be - any XML is acceptable, but the best XML is a matter of some debate. > 2) I do not see the need for the mvc pattern to be implemented here > as my service is not a true web application. Seems fair. MVC is *just* a programming pattern. You don't need to use it. > 3) I do not see the need to use the Restlet framework since my intent > is to use the web container ONLY to support the Soap based web service. > Also I do not want to embed the overhead of routers and resources > and servlet converters. That's fine. You don't need to use Restlet to make a REST service. I have *never* used Restlets and I've been designing REST-based services since 2002. > Also an example code of the Rest based web service is appreciated. Have a poke around on the REST wiki... there are good examples of basic REST technologies. -- Nic Ferrier http://www.tapsellferrier.co.uk
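For the example code asked for above, a minimal read-only REST service can be sketched as a plain WSGI app, with no framework at all (the /users resource and its XML representations are invented for illustration; this is one possible sketch, not the Restlet way):

```python
# A minimal sketch of a read-only RESTful service as a WSGI application.
# The /users resource and its XML are made up for illustration.
users = {"fish": "<user><name>fish</name></user>"}

def app(environ, start_response):
    method = environ["REQUEST_METHOD"]
    path = environ.get("PATH_INFO", "/")
    if method != "GET":
        # Read-only sketch: anything but GET is refused.
        start_response("405 Method Not Allowed", [("Allow", "GET")])
        return [b""]
    if path == "/users":
        # An index document listing the URIs of the individual resources.
        body = "<users>" + "".join(
            '<user href="/users/%s"/>' % name for name in sorted(users)
        ) + "</users>"
    elif path.startswith("/users/") and path[len("/users/"):] in users:
        body = users[path[len("/users/"):]]
    else:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    start_response("200 OK", [("Content-Type", "application/xml")])
    return [body.encode("utf-8")]
```

It can be served with the standard library alone, e.g. `wsgiref.simple_server.make_server("", 8000, app)`, so no servlet container or Restlet machinery is required for a first experiment.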
Alan Dean wrote: > > > http://pluralsight.com/blogs/tewald/archive/2007/04/26/46984.aspx > <http://pluralsight.com/blogs/tewald/archive/2007/04/26/46984.aspx> > > "It's depressing to think that SOAP started just about 10 years ago and > that now that everything is said and done, we built RPC again." I recall Mark Baker and James Strachan talking about paradigms and mental gear shifts a few years back when it comes to 'getting' REST. I mean seriously, what is there to get? How can an *entire industry* not get REST until Q406 or thereabouts? I don't buy it. I think the wheels are falling off the WS industry wagon, and SOA will be next. We're witnessing one of those once a decade industry re-alignments. cheers Bill
Nic James Ferrier wrote: > "skkcr" <skkcr@...> writes: > >> 1) does a REST-based web service involve simply exposing an XML >> document that has the URIs for different resources to the web >> service clients and returning the queried data in XML form? > > Yes. If you did that you would have a usable, RESTful service. > > There is debate here about what that XML should be. But any XML is > acceptable. The BEST XML is a matter of some debate. > XML has nothing to do with REST and vice versa. REST merely describes how you exchange documents (excuse me, representations). It says nothing about the format of these documents. There are many useful REST services that do not use XML, and many useful XML services that do not use REST. That said, it is often a good idea to use XML to exchange representations, but don't get fooled into thinking you have to. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
I've been invited to give a short introduction to REST in about a week. Given that the presentation will be fairly short (15-20mins), I'm wondering what are the key aspects that it should cover. Other presenters at the same event will cover SOAP, WCF, and SOA. As before, I'll make the final slides available for others to re-use or inspire themselves from. Looking forward to your suggestions. Thanks. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
* Steve G. Bjorg <steveb@...> [2007-04-30 16:40]: > I've been invited to give a short introduction to REST in about > a week. Given that the presentation will be fairly short > (15-20mins), I'm wondering what are the key aspects that it > should cover. Other presenters at the same event will cover > SOAP, WCF, and SOA. As before, I'll make the final slides > available for others to re-use or inspire themselves from. > > Looking forward to your suggestions. I suggest two recent weblog posts as inspiration: • I finally get REST. Wow. http://pluralsight.com/blogs/tewald/archive/2007/04/26/46984.aspx Tim Ewald gives a very simple, sweet and to-the-point explanation of “hypermedia as the engine of application state.” • Squid is My Service Bus http://www.mnot.net/blog/2007/04/29/squid I *really* like this one. Mark Nottingham demonstrates the power that naturally falls out of declarative systems. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Folks, Please excuse my ignorance if this was discussed earlier, but did anyone consider the impact of the URL size limitation on RESTful application implementation in the real world? To the best of my knowledge, the RFCs do not limit the size of a URL or GET query string. However, quick research on the net suggests that IE, for instance, can't accommodate more than 2083 characters. I'm inclined to believe that 2083 characters (not bytes) is essentially the limit of state information that can be included in a RESTful request. In reality, the real limit is even lower due to URL-encoding requirements. Any thoughts or comments are highly appreciated and welcome! Cheers, Hovhannes Tumanyan
On Mon, 2007-04-30 at 21:20 +0000, hovhannes_tumanyan wrote:
> Please, excuse my ignorance, if this was discussed earlier but did
> anyone consider the impact of URL size limitation on RESTful
> application implementation in real world?
Sure. REST is an architectural style. HTTP is a particular
implementation. Sometimes the real world gets in the way of
implementing the ideal. Oh well.
> believe that 2083 characters (not bytes) essentially is the limit of
> state information that can be included in RESTful request.
You might mean "in an HTTP request".
I've hit this limit. I converted the GET to a POST, though without
creating a new subordinate resource. I suppose I could have created one
to be more Ideal, but it just didn't matter in the context. I grumbled
very much when I encountered it, and I grumble every time I re-encounter
the page, but I get on with life.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
Yes, it's really unfortunate that GET can't take a body. :( You could do a POST with Content-Type set to application/x-www-form-urlencoded and then use the X-HTTP-Method-Override header to override the HTTP method to be GET. Probably not a good idea though. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org On Apr 30, 2007, at 2:20 PM, hovhannes_tumanyan wrote: > Folks, > Please, excuse my ignorance, if this was discussed earlier but did > anyone consider the impact of URL size limitation on RESTful > application implementation in real world? > > To the best of my knowledge, RFC do not limit the size of URL or GET > query string. However, quick research on the net suggests that IE, for > instance, can't accommodate more than 2083 characters. I'm inclined to > believe that 2083 characters (not bytes) essentially is the limit of > state information that can be included in RESTful request. > In reality, the real limit is even lower due to URLencoding > requirements. > > Any thoughts or comments are highly appreciated and welcomed! > > Cheers, > Hovhannes Tumanyan
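For illustration, the routing decision a server would face under Steve's (self-admittedly dubious) tunnelling scheme can be sketched as follows. The header name comes from his post; the whitelist of overridable methods and the helper itself are my own assumptions, not any framework's actual behaviour:

```python
# Hypothetical server-side dispatcher logic for X-HTTP-Method-Override
# tunnelling. Whether any given framework honours this header is
# implementation-specific; the allowed set below is an assumption.
ALLOWED_OVERRIDES = {"GET", "PUT", "DELETE"}

def effective_method(method: str, headers: dict) -> str:
    """Return the HTTP method a dispatcher should route on."""
    override = headers.get("X-HTTP-Method-Override")
    # Only tunnel through POST; never let a header rewrite a request
    # that intermediaries already treated as safe and cacheable.
    if method == "POST" and override in ALLOWED_OVERRIDES:
        return override
    return method
```

The catch, as the follow-ups note, is that intermediaries only see the outer POST, so caching and other GET benefits are lost anyway.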
* Steve Bjorg <steveb@...> [2007-05-01 00:15]: > You could do a POST with Content-Type set to > application/x-www-form-urlencoded and then use the > X-HTTP-Method-Override header to override the http > method to be GET. How can you do that from within a HTML form? IE was mentioned as a culprit, so the workaround has to work with the means available to HTML, and setting arbitrary headers is not one of them. If it was, I’d suggest to just add an `X-Real-URI` header and stuff the URI in there, then request a generic short redirector URI. (Do not forget to `Vary` on your custom header!) That is as much of a hack, but it leaves the verb untouched so that intermediaries have a chance to do something useful even under these adverse conditions. As it is, POST tunneling is the only way to get out of the bind. Yuck… but what can you do. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
I'm not in favor of using the POST method due to the additional implicit state that may complicate the caching and error recovery capabilities of RESTful systems. Unfortunately, a better solution may not be possible within the given constraints. Hopefully vendors will adjust browsers and other web clients to match the spec, which imposes no URL length limit. Cheers, Hovhannes --- In rest-discuss@yahoogroups.com, "A. Pagaltzis" <pagaltzis@...> wrote: > > * Steve Bjorg <steveb@...> [2007-05-01 00:15]: > > You could do a POST with Content-Type set to > > application/x-www-form-urlencoded and then use the > > X-HTTP-Method-Override header to override the http > > method to be GET. > > How can you do that from within a HTML form? IE was mentioned as > a culprit, so the workaround has to work with the means available > to HTML, and setting arbitrary headers is not one of them. > > If it was, I’d suggest to just add an `X-Real-URI` header and > stuff the URI in there, then request a generic short redirector > URI. (Do not forget to `Vary` on your custom header!) That is > as much of a hack, but it leaves the verb untouched so that > intermediaries have a chance to do something useful even under > these adverse conditions. > > As it is, POST tunneling is the only way to get out of the bind. > Yuck… but what can you do. > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/> >
On 4/30/07, hovhannes_tumanyan <hovhannes_tumanyan@...> wrote: > > Folks, > Please, excuse my ignorance, if this was discussed earlier but did > anyone consider the impact of URL size limitation on RESTful > application implementation in real world? > > To the best of my knowledge, RFC do not limit the size of URL or GET > query string. However, quick research on the net suggests that IE, for > instance, can't accommodate more than 2083 characters. I'm inclined to > believe that 2083 characters (not bytes) essentially is the limit of > state information that can be included in RESTful request. > In reality, the real limit is even lower due to URLencoding > requirements. > > Any thoughts or comments are highly appreciated and welcomed! More restrictive than that is the IIS path length limitation of (up to) 260 characters. I don't know if other web servers suffer the same limitation. Essentially, IIS assumes that the path represents a physical file and applies the windows MAX_PATH length (this won't affect anything after ? or # in the URI). If you nest the virtual directory deep inside the file system, this will reduce the supported path length even further. Regards, Alan Dean http://thoughtpad.net/alan-dean
http://astoria.mslivelabs.com/ ..the revolution has started! www.mourant.com The information in this email (and any attachments) may contain privileged and confidential information and is intended solely for the use of the individual or organisation to whom it is addressed. The contents may not be disclosed or used by anyone other than the addressee(s). If you are not the intended recipient, please notify Mourant immediately at the above e-mail address or telephone +44 (0)1534 609 000 and delete all copies of the e-mail from your system. Mourant cannot accept any responsibility for the accuracy, completeness or timely delivery of this message as it has been transmitted over a public network. If you suspect that the message may have been intercepted or amended, please call the sender. Although Mourant scans e-mail and attachments for viruses, it does not guarantee that either are virus-free and accepts no liability for any damage sustained as a result of viruses. Replies to this e-mail may be monitored by Mourant Limited for business and operational reasons. Mourant is not liable for any views or opinions expressed by the sender where this is a non-business e-mail. Further information on the Mourant group of companies including their registered offices and where relevant, details of their local regulators can be accessed via the Mourant website www.mourant.com
LOL. These guys tried to recruit me about a couple years ago (Pablo Castro, in fact - mentioned on the page). I kept telling them they needed to put an HTTP interface on their stuff, and they just couldn't grok it. No, they didn't realize who I was. 8-) On 5/1/07, Philip Ruelle <Philip.Ruelle@...> wrote: > > http://astoria.mslivelabs.com/ > > ..the revolution has started!
Philip Ruelle wrote: > http://astoria.mslivelabs.com/ One of the primary advantages of a REST approach is that you don't need a massive framework and tools to implement it. I think these people are missing the point.
Hello, I am new to REST, but I like the approach and want to use it in my project. We have tests and each test has a description. This contains the input parameters of the test, the type of test, etc. A test can be executed, and after the test has finished you get a test report. I think of the following REST model: Resources: TestRecipe / TestReport To put a test recipe: PUT http://Roger/Tests/TestRecipe with <testrecipe> as XML document. The URI of the test recipe is returned. With GET http://Roger/Tests/TestRecipe you will get all available test recipes. The problem I have is running a test recipe. A test recipe can be started multiple times. POST http://Roger/Tests/TestRecipe/TestRecipe12?cmd=run This will start a test run and the URI of the TestReport is returned. Within the TestReport the URI of the corresponding TestRecipe is put. The test results can be viewed by: GET http://Roger/Tests/TestReport/TestReport12 However, should this last resource be created right after creating the run of the test recipe? The test report is available only after the test is ready! Love to hear your comments! Roger vd Kimmenade
> I think of the following REST model:
>
> Resources: TestRecipe / TestReport
>
> To put a test recipe:
>
> PUT http://Roger/Tests/TestRecipe
>
> with <testrecipe> as XML document. The URI of the test recipe is
> returned.

If you use PUT, the client should PUT to the URI of the test recipe. If you want the server to create the new URI, the client should POST to a URI like http://Roger/Tests/TestRecipes, and the server should return the URI of the newly created recipe resource in the Location header.

> With GET http://Roger/Tests/TestRecipe you will get all available
> test recipes.

Sounds good.

> The problem I have is running a test recipe. A test recipe can be
> started multiple times.
>
> POST http://Roger/Tests/TestRecipe/TestRecipe12?cmd=run
>
> This will start a test run and the URI of the TestReport is returned.

Avoid using commands/verbs in URIs. Alternatives include:

POST to http://Roger/Tests/TestRecipe/TestRecipe12 (test parameters in POST body)
POST to http://Roger/Tests/TestRecipe/TestRecipe12/TestRuns (test parameters in POST body)
PUT a new test recipe representation to http://Roger/Tests/TestRecipe/TestRecipe12 that includes the parameters for the new test run

> Within the TestReport the URI of the corresponding TestRecipe is put.
>
> The test results can be viewed by:
>
> GET http://Roger/Tests/TestReport/TestReport12
>
> However, should this last resource be created right after creating
> the run of the test recipe? The test report is available only after
> the test is ready!

You could create a test report resource with a state of "running," and possibly an expected completion time. The resource would change when the test completes. Or you could do a "temporary redirect" (307) to a resource that represents incomplete test reports, or you could return a "no content" (204) until the test report is available. Not sure about that last one, others on the list might want to chime in. Good luck, Kevin Christen
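Kevin's flow (POST to a collection, read the Location header from the 201, then poll a report whose state starts as "running") can be sketched from the client side. The helper names and the report dict below are stand-ins for illustration, not a real API:

```python
# Client-side sketch of the create-then-poll pattern. Both helpers
# are hypothetical; the report dict stands in for a parsed test
# report representation from the server.
def handle_create_response(status: int, headers: dict) -> str:
    """After POSTing a recipe to the collection, return the new run's URI."""
    if status != 201:
        raise RuntimeError("expected 201 Created, got %d" % status)
    return headers["Location"]

def next_action(report: dict) -> str:
    """Decide what to do with a fetched test report."""
    if report.get("status") == "running":
        return "poll-again"  # the resource exists, but the test isn't finished
    return "done"
```

The point of the design is that the client never needs an out-of-band channel: the Location header and the report's own state field drive the whole interaction.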
--- In rest-discuss@yahoogroups.com, Chris Burdess <dog@...> wrote: > > Philip Ruelle wrote: > > http://astoria.mslivelabs.com/ > > One of the primary advantages of a REST approach is that you don't > need a massive framework and tools to implement it. I think these > people are missing the point. > I agree - REST and simplicity seem to go hand in hand. For the SnapLogic project, we decided very early on to keep things as lightweight as possible, to avoid exactly this need for a massive tools framework to support the system. As a result of that decision, we implemented a developer oriented, code level interface to resources in the the system as the basic layer. This interface provides a way to build higher level tools (like the initial GUI), while still allowing the use of the system to define and manipulate data resources without requiring a complicated tool chain.
>>>>> "Kevin" == Kevin Christen <kevin_christen@...> writes:
Kevin> If you use PUT, the client should PUT to the URI of the
Kevin> test recipe. If you want the server to create the new URI,
Kevin> the client should POST to a URI
You can perfectly well PUT to a new URI and create it.
- --
All the best,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your outdated email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
>>>>> "rogervdkimmenade" == rogervdkimmenade <roger.van.de.kimmenade@...> writes:
rogervdkimmenade> Hello i am new to REST, but i like the approach
rogervdkimmenade> and want to use it in my project.
Good! But it doesn't look like you did :-)
As things look still pretty confusing, at least for me, I suggest you
try to specify everything in terms of only the four verbs operating on
resources.
PUT for example *replaces* or *creates* the resource at a URI. It
doesn't look to me like you used it that way.
For starting a test you should use POST. But another option would be
to GET a test, which would run it and return the report. Depending on
the length of the test, that might be a feasible approach.
Things like ?cmd=run don't look REST-like at all.
It would be better to POST a test id to a run queue or so.
Hope this helps a bit.
- --
Regards,
Berend de Boer
Well, perhaps they need massive frameworks and tools to stay employed? > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Chris Burdess > Sent: Wednesday, May 02, 2007 12:52 AM > To: Philip Ruelle > Cc: rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] First Tim Ewald, now the MS Data > Access & Storage team.... > > Philip Ruelle wrote: > > http://astoria.mslivelabs.com/ > > One of the primary advantages of a REST approach is that you > don't need a massive framework and tools to implement it. I > think these people are missing the point.
On Wed, 2007-05-02 at 17:57 -0700, Mike Dierken wrote:
> Well, perhaps they need massive frameworks and tools to stay employed?
Perhaps, but it's unclear how "massive" this is, especially given some
of the benefits. It looks like a maybe slightly over-engineered way to
expose resources and queries via HTTP, via XML, JSON and RDF. Good on
them, for that. I confess I've not really looked at it past the .DOCs
at the web site...
The things that are more worrisome to me are the use of brackets to
index resources (e.g., </customer>, </customer[42]>) rather than another
slash for hierarchy. OTOH, I've never seen really useful benefits of
hierarchical resources. Also, I'm not sure that '[]' are allowed in
URIs... Obviously, they could be encoded, but it doesn't seem wise to
choose a character that needs to be {en,de}coded all the time.
Also potentially worrisome is embedding a structured query language in
the non-query-part of URIs, e.g. </customer[Active eq true]>. Of
course, having a query language is good, but it feels "nicer" to encode
it in the query part, somehow ... though maybe I'm just getting
distracted because they use the same word "query". I mean, the
aforementioned query *is* a distinct resource, and can be cached,
respond to methods, &c.
Has anyone delved deeper into it?
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
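jsled's doubt about '[' and ']' is well-founded: RFC 3986 classes them as gen-delims, so path segments like Astoria's /customer[42] have to be percent-encoded on the wire. A quick check with Python's standard library:

```python
# '[' and ']' are RFC 3986 gen-delims, so they cannot appear raw in a
# path segment; percent-encoding is required on the wire.
from urllib.parse import quote, unquote

raw = "/customer[42]"
wire = quote(raw)  # '/customer%5B42%5D' -- quote() leaves '/' alone by default
assert unquote(wire) == raw  # round-trips cleanly
```

Which bears out the "needs {en,de}coding all the time" objection: every client and intermediary has to agree on encoding those brackets.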
On 5/2/07, mikeyp_falco <mikeyp@...> wrote: > > --- In rest-discuss@yahoogroups.com, Chris Burdess <dog@...> wrote: > > > > Philip Ruelle wrote: > > > http://astoria.mslivelabs.com/ > > > > One of the primary advantages of a REST approach is that you don't > > need a massive framework and tools to implement it. You do need a massive framework and tools. It's just that those tools are shipping and functional, instead of existing only in the minds of PDF authors producing vaporware. We call these massive frameworks and tools HTTP clients and servers. It is a big mistake to build new massive frameworks on top of a massive framework if you don't need to. Of course, the software industry will now get to compete with a more clueful bunch of MS HTTP products, instead of the WS-Bogus nonsense you could immediately write off and never think about again. The Astoria stuff may not be the prettiest, but it looks to me like it will *actually work* without an army of consultants, which is a big change. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
On 2 May 2007, at 09:56, rogervdkimmenade wrote: > Within the TestReport the URI of the corresponding TestRecipe is put. > The test results can be viewed by: > GET http://Roger/Tests/TestReport/TestReport12 (BTW, I don't really fancy the CamelCase style URLs, but I guess that's not as important.. and do we need to repeat 'Test' everywhere when it's under /Test ?) What about including the test run as a real resource? After all it has state (running, finished, etc) and references other resources, such as the recipe and the report. (I'm shortening the URLs and content): POST /TestRecipe <testrecipe> .. 201 Created Location: /TestRecipe/13 POST /TestRun <testrun> <testrecipe xlink:href="/TestRecipe/13" /> </> 201 Created Location: /TestRun/192 GET /TestRun/192 <testrun> <testrecipe xlink:href="/TestRecipe/13" /> <status>Running</status> .. (Could include data such as when it was started etc) GET /TestRun/192 <testrun> <testrecipe xlink:href="/TestRecipe/13" /> <status>Finished</status> <result>Failed</result> <report xlink:href="/TestRun/192/report" /> .. GET /TestRun/192/report <testreport> .. This means that if some other conditions have changed, you can post a new test run using the old test recipe URI. -- Stian Soiland, myGrid team School of Computer Science The University of Manchester http://www.cs.man.ac.uk/~ssoiland/
Mark Baker wrote: > > > LOL. These guys tried to recruit me about a couple years ago (Pablo > Castro, in fact - mentioned on the page). I kept telling them they > needed to put an HTTP interface on their stuff, and they just couldn't > grok it. No, they didn't realize who I was. 8-) I'm more interested to see how these ideas get productized inside MSFT, specifically how they work their way into sharepoint*. It segues nicely with the current arguments around open document formats, which are as much about server-side repositories as they are about word processors. cheers Bill * and to a lesser degree, biztalk
Robert Sayre wrote: > > > On 5/2/07, mikeyp_falco <mikeyp@... > <mailto:mikeyp%40snaplogic.org>> wrote: >> >> --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, Chris Burdess <dog@...> wrote: >> > >> > Philip Ruelle wrote: >> > > http://astoria.mslivelabs.com/ <http://astoria.mslivelabs.com/> >> > >> > One of the primary advantages of a REST approach is that you don't >> > need a massive framework and tools to implement it. > > You do need a massive framework and tools. It's just that those tools > are shipping and functional, instead of existing only in the minds of > PDF authors producing vaporware. We call these massive frameworks and > tools HTTP clients and servers. It is a big mistake to build new > massive frameworks on top of a massive framework if you don't need to. > I guess I just don't consider an HTTP server or client library 'a massive framework.' On the other hand, with 100+ million public web sites, plus supporting servers, proxies, and so on, the entire internet _infrastructure_ is massive. I interpreted Philip's statement as referring to the tools to implement a specific application or site, not the supporting infrastructure below that makes the implementation possible. With a REST architecture, it has been possible to build out that web infrastructure over time, using whatever tools or frameworks are appropriate for each site and its content. Some sites use simple tools, some use complex frameworks. Many started simple and grew over time. But they all interoperate nicely, with a low cost of entry in terms of what's required to get started. Tools and frameworks are good things; the option to choose the appropriate tool for the job is even better. That is a REST advantage. > Of course, the software industry will now get to compete with a > more clueful bunch of MS HTTP products, instead of the WS-Bogus > nonsense you could immediately write off and never think about again. 
> The Astoria stuff may not be the prettiest, but it looks to me like it > will *actually work* without an army of consultants, which is a big > change. There definitely seems to be a shift to REST from WS-*. With the internet as a working reference implementation, I can see why. mike -- mikeyp@... http://www.snaplogic.org
On Thu, 2007-05-03 at 09:26 -0700, Mike Pittaro wrote:
> With a REST architecture, it has been possible to build out that web
> infrastructure over time, using whatever tools or frameworks are
> appropriate for each site and it's content. Some sites use simple
> tools, some use complex frameworks. Many started simple and grew over
> time. But they all interoperate nicely, with a low cost of entry in
> terms of whats required to get started. Tools and frameworks are good
> things, the option to choose the appropriate tool for the job is even
> better. That is a REST advantage.
Is it? I think it's more a function of open (and libre-Free ones)
specifications and protocols. Though REST does help to some degree.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
On 5/3/07, Josh Sled <jsled@...> wrote: > On Thu, 2007-05-03 at 09:26 -0700, Mike Pittaro wrote: > > With a REST architecture, it has been possible to build out that web > > infrastructure over time, using whatever tools or frameworks are > > appropriate for each site and it's content. Some sites use simple > > tools, some use complex frameworks. Many started simple and grew over > > time. But they all interoperate nicely, with a low cost of entry in > > terms of whats required to get started. Tools and frameworks are good > > things, the option to choose the appropriate tool for the job is even > > better. That is a REST advantage. > > Is it? I think it's more a function of open (and libre-Free ones) > specifications and protocols. Though REST does help to some degree. Sure. REST helps by providing oodles of simplicity. Mark.
All,
I'm looking to see how best to implement REST-compatible
authentication/authorization that works with AOL's OpenAuth service.
The service provides ways for users to authenticate themselves and to
grant permissions to services to do things such as read buddy lists on
behalf of a user. These permissions are encapsulated in a portable
token which can be passed around.
Thus, the primary requirements are to get clients to pass a token (which
combines authentication and authorization) when attempting a method
against a resource; and to signal auth(.*) failures in a reasonable way.
Windows Live and GData both implement custom WWW-Authenticate: header
schemes, and unfortunately they don't follow exactly the same pattern,
or I'd just copy it. So here are my current thoughts:
(1) Clients provide an Authorization: header if they have a token. The
format is:
Authorization: OpenAuth token="..."
where ... indicates base64-encoded token data (an opaque string for
purposes of this discussion).
(2) When there is a problem, or the Authorization: header is missing, a
401 response is returned with a WWW-Authenticate: header.
401 Need user consent
...
WWW-Authenticate: OpenAuth realm="AOL", fault="NeedConsent",
url="http://my.screenname.aol.com/blah?a=boof&b=zed&...."
where the status line carries a human-readable message, and the
WWW-Authenticate OpenAuth header contains the precise fault code, one of
{NeedToken, NeedConsent, ExpiredToken}. If present, the url parameter
gives the URL of an HTML page which can be presented to the end user to
mitigate the problem according to certain criteria documented
elsewhere. For example it can point to a permissions page which lets
the user grant permission to a service to perform a POST. More likely
it would point to a login page.
Critiques are welcomed.
Thanks,
--
Abstractioneer <http://feeds.feedburner.com/aol/SzHO>John Panzer
System Architect
http://abstractioneer.org
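As one way to consume the proposed challenge, a client could parse the WWW-Authenticate header into its parameters. The "OpenAuth" scheme name, fault codes, and parameter names below come from the post above; the regex-based parsing is my own simplification and ignores quoted-string escaping:

```python
# Sketch of parsing the proposed WWW-Authenticate: OpenAuth challenge.
# Simplified: assumes every parameter value is a plain quoted string
# with no embedded escaped quotes.
import re

_PARAM = re.compile(r'(\w+)="([^"]*)"')

def parse_openauth_challenge(header: str) -> dict:
    """Turn 'OpenAuth realm="AOL", fault="NeedConsent", ...' into a dict."""
    scheme, _, params = header.partition(" ")
    if scheme != "OpenAuth":
        raise ValueError("unexpected auth scheme: %s" % scheme)
    return dict(_PARAM.findall(params))
```

A client would branch on the extracted fault code ({NeedToken, NeedConsent, ExpiredToken}) and, where present, surface the url parameter's HTML page to the end user.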
Mike Pittaro wrote: > There definitely seems to be a shift to REST from WS-*. With the > internet as a working reference implementation, I can see why. s/internet/Web/
On 5/3/07, Robert Sayre <sayrer@...> wrote: > The Astoria stuff may not be the prettiest, but it looks to me like it > will *actually work* without an army of consultants, which is a big > change. I thought requiring IBM Global Services was actually an official requirement of many WS-* specifications. Whatever you say about MS, their interest is in ensuring that Windows remains the platform for client and server development, that the MS office server suite is the back end for future apps, and that MS office file formats remain in charge. The format for documents is a lot more strategic to them than how you upload it to a service. They've also had RESTy stuff in the past, in Exchange, for example. The company embraced SOAP but it was, in the early days, DCOM in XML. -steve p.s, say what you like about SOAP, but in REST, the enemy of GET is the proxy server that thinks it knows better. The one that returns 200+text/html when the far end 401s on you. The one that caches stuff for weeks, even when the TTL is seconds. The one that caches an incomplete download and serves up to other callers.
On 5/1/07, Mark Baker <distobj@...> wrote: > LOL. These guys tried to recruit me about a couple years ago (Pablo > Castro, in fact - mentioned on the page). I kept telling them they > needed to put an HTTP interface on their stuff, and they just couldn't > grok it. No, they didn't realize who I was. 8-) I visited building 42 a few years back to explore gainful employment opportunities, but they threw me out before lunch. This was while I still thought SOAP was a good idea, so the reason I was marched off the premises was competence rather than ideology. With hindsight, it was a near miss, even if the skiing there is better than in the UK. -steve
"Steve Loughran" <steve.loughran.soapbuilders@...> writes: > p.s, say what you like about SOAP, but in REST, the enemy of GET is > the proxy server that thinks it knows better. The one that returns > 200+text/html when the far end 401s on you. The one that caches stuff > for weeks, even when the TTL is seconds. The one that caches an > incomplete download and serves up to other callers. Errmmmm... isn't this true of SOAP as well? A proxy that was that badly implemented (and I agree that some proxies ARE that badly implemented) would fek up a SOAP call as well. -- Nic Ferrier http://www.tapsellferrier.co.uk
On 4-May-07, at 6:01 AM, Steve Loughran wrote: > On 5/3/07, Robert Sayre <sayrer@...> wrote: > > > The Astoria stuff may not be the prettiest, but it looks to me > like it > > will *actually work* without an army of consultants, which is a big > > change. > > I thought requiring IBM Global Services was actually an official > requirement of many WS-* specifications. > > Whatever you say about MS, their interest is in ensuring that Windows > remains the platform for client and server development, that the MS > office server suite is the back end for future apps, and that MS > office file formats remain in charge. Yes. That is their one and only strategy for staying in business, and is the lens through which all of their products or announcements should be viewed. --T > The format for documents is a > lot more strategic to them than how you upload it to a service. > ...
Nic James Ferrier wrote: > > > "Steve Loughran" <steve.loughran.soapbuilders@... > <mailto:steve.loughran.soapbuilders%40gmail.com>> writes: > > > p.s, say what you like about SOAP, but in REST, the enemy of GET is > > the proxy server that thinks it knows better. The one that returns > > 200+text/html when the far end 401s on you. The one that caches stuff > > for weeks, even when the TTL is seconds. The one that caches an > > incomplete download and serves up to other callers. > > Errmmmm... isn't this true of SOAP as well? > > A proxy that was that badly implemented (and I agree that some proxies > ARE that badly implemented) would fek up a SOAP call as well. Not in the good old days when all SOAP calls ran over POST. cheers Bill
On Thu, 2007-05-03 at 15:15 -0400, Mark Baker wrote:
> On 5/3/07, Josh Sled <jsled@...> wrote:
> > On Thu, 2007-05-03 at 09:26 -0700, Mike Pittaro wrote:
> > > With a REST architecture, it has been possible to build out that web
> > > infrastructure over time, using whatever tools or frameworks are
> > > appropriate for each site and its content. Some sites use simple
> > > tools, some use complex frameworks. Many started simple and grew over
> > > time. But they all interoperate nicely, with a low cost of entry in
> > > terms of what's required to get started. Tools and frameworks are good
> > > things, the option to choose the appropriate tool for the job is even
> > > better. That is a REST advantage.
> >
> > Is it? I think it's more a function of open (and libre-Free ones)
> > specifications and protocols. Though REST does help to some degree.
>
> Sure. REST helps by providing oodles of simplicity.
That's true. I underestimated this.
The full combination of things REST defines (and HTTP provides) are
quite complex. That is, a trivial GET is straightforward, but add in:
- specified operation semantics
- cache-control headers
- separate content- and transfer-encoding
- content negotiation
- language negotiation
- conditional requests
- ranged requests
- media types
- hierarchical status codes
... it's certainly more complex than a naïve "send serialized operation
& arguments, get serialized response" protocol could be.
Simplicity comes throughout the usage spectrum, though. At the "far"
end, the overall system after you sum up all the features that those
add-ins support is far simpler than other approaches.
But at the other end – and more importantly for thinking about the
growth of the web, I think – it also "degrades" simply: you really can
just say:
$ echo -ne "GET /foo HTTP/1.1\r\nHost: host\r\n\r\n" | nc host 80
and it will work. And many solutions can be implemented with requests
on this order of complexity.
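Josh's scaling-down point can be made concrete: every one of the add-ins listed above is just extra header lines on the same minimal text message. A sketch in Python (illustrative only; the host name and ETag value are placeholders, not anything real):

```python
def build_request(path, host, extra_headers=None):
    """Assemble a minimal HTTP/1.1 GET request as raw bytes."""
    lines = ["GET %s HTTP/1.1" % path, "Host: %s" % host]
    for name, value in (extra_headers or {}).items():
        lines.append("%s: %s" % (name, value))
    # Close the connection so a one-shot client isn't left waiting.
    lines.append("Connection: close")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# The trivial GET:
simple = build_request("/foo", "host")

# A conditional, ranged request is the same message plus two header
# lines (the ETag here is a made-up placeholder):
fancy = build_request("/foo", "host", {
    "If-None-Match": '"abc123"',
    "Range": "bytes=0-1023",
})
```

Negotiation, caching, and the rest degrade away in exactly the same fashion when you don't need them.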
I was thinking that at this "small" side – which imho is more important
w.r.t. the growth of the web – it's not so much about REST, as it is
that HTTP and HTML are open. But the interoperable growth of the web
is due both to RESTful uniform operations with clearly defined behavior
for intermediaries (especially for the "simple" requests) and to the
ability to read and implement open specs for those semantics.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
On 4-May-07, at 11:59 AM, Josh Sled wrote: > ... it also "degrades" simply: you really can > just say: > > $ echo -ne "GET /foo HTTP/1.1\r\nHost: host\r\n\r\n" | nc host 80 Pedantically speaking, so will: $ curl http://host/foo
On Fri, 2007-05-04 at 12:28 -0300, Toby Thain wrote:
> On 4-May-07, at 11:59 AM, Josh Sled wrote:
>
> > ... it also "degrades" simply: you really can
> > just say:
> >
> > $ echo -ne "GET /foo HTTP/1.1\r\nHost: host\r\n\r\n" | nc host 80
>
> Pedantically speaking, so will:
> $ curl http://host/foo
Indeed. I wanted to be a level lower than curl/wget, even. :)
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
* Toby Thain <toby@...> [2007-05-04 17:30]: > On 4-May-07, at 11:59 AM, Josh Sled wrote: > > > ... it also "degrades" simply: you really can just say: > > > > $ echo -ne "GET /foo HTTP/1.1\r\nHost: host\r\n\r\n" | nc host 80 > > Pedantically speaking, so will: > $ curl http://host/foo Yeah, but curl is explicitly an HTTP client, whereas nc doesn’t implement anything beyond TCP. However, HTTP sticks to the essentials so well that you need nothing more than a TCP implementation to get useful work done. Sure, your code won’t be very generic – in fact it will implement almost none of the spec. But it doesn’t have to if you don’t need it to. HTTP scales *down* as effortlessly as it scales up. Which I think was Josh’s point. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On 4-May-07, at 2:19 PM, A. Pagaltzis wrote: > * Toby Thain <toby@smartgames.ca> [2007-05-04 17:30]: > > On 4-May-07, at 11:59 AM, Josh Sled wrote: > > > > > ... it also "degrades" simply: you really can just say: > > > > > > $ echo -ne "GET /foo HTTP/1.1\r\nHost: host\r\n\r\n" | nc host 80 > > > > Pedantically speaking, so will: > > $ curl http://host/foo > > Yeah, but curl is explicitly an HTTP client, whereas nc doesn't > implement anything beyond TCP. Yeah, I didn't get Josh's point at first. Since corrected. :-) > > However, HTTP sticks to the essentials so well that you need > nothing more than a TCP implementation to get useful work done. ...
Mark Baker wrote: > > > On 5/3/07, Josh Sled <jsled@... > <mailto:jsled%40asynchronous.org>> wrote: > > On Thu, 2007-05-03 at 09:26 -0700, Mike Pittaro wrote: > > > With a REST architecture, it has been possible to build out that web > > > infrastructure over time, using whatever tools or frameworks are > > > appropriate for each site and it's content. Some sites use simple > > > tools, some use complex frameworks. Many started simple and grew over > > > time. But they all interoperate nicely, with a low cost of entry in > > > terms of whats required to get started. Tools and frameworks are good > > > things, the option to choose the appropriate tool for the job is even > > > better. That is a REST advantage. > > > > Is it? I think it's more a function of open (and libre-Free ones) > > specifications and protocols. Though REST does help to some degree. > > Sure. REST helps by providing oodles of simplicity. I agree with Josh; what he's pointing at is the basis for deriving real-world practical things like view source. I think REST is simple too, but where it really helps is the way it organizes simplicity (no free lunch for architectural styles). cheers Bill
On 5/4/07, Bill de hOra <bill@...> wrote: > Nic James Ferrier wrote: > > > > > > "Steve Loughran" <steve.loughran.soapbuilders@... > > <mailto:steve.loughran.soapbuilders%40gmail.com>> writes: > > > > > p.s, say what you like about SOAP, but in REST, the enemy of GET is > > > the proxy server that thinks it knows better. The one that returns > > > 200+text/html when the far end 401s on you. The one that caches stuff > > > for weeks, even when the TTL is seconds. The one that caches an > > > incomplete download and serves up to other callers. > > > > Errmmmm... isn't this true of SOAP as well? > > > > A proxy that was that badly implemented (and I agree that some proxies > > ARE that badly implemented) would fek up a SOAP call as well. > > Not in the good old days when all SOAP calls ran over POST. > Which is, IMO, the only way to run SOAP: two-way synchronous operations, not WS-A-addressed one-way calls that aren't even allowed to let you know you just tried to post ill-formed XML. There is at least one GET-related bug in classic Axis1.x. The happyaxis.jsp page is designed to perform passive system diagnostics. Although originally meant for load-balancing routers to use, it has become the normal way to check that Axis is installed. Indeed, a search for "Axis Happiness Page" will show you the internals of many axis installations out there. It's a JSP page, and is designed to be (a) entirely standalone and (b) easily extensible by users. So there are variations for things like apache muse and other SOAP-based services. But, the little JSP page doesn't set the cache expiry info. So while it works well for dev systems on the local net, if you go live with something beyond the firewall, or if you are trying to do interop tests against remote systems, checking for happyaxis only shows you if the proxy server has, at some point in the past, been publishing a status page. 
now we have a proxy server that caches things for a couple of days, and if the server at the far end is not there, it serves up the old stuff without any warning that the content is really, really out of date. Which makes checking the base happyaxis.jsp page useless, unless you tack on some random query string at the end, which is what I ended up doing: http://smartfrog.svn.sourceforge.net/viewvc/smartfrog/trunk/core/components/deployapi/build.xml?view=markup And do you know who it was that left the cache-expiry headers out of the JSP page? It was me. So now I am suffering because I forgot to add it four years ago, and the JSP page and derivatives are out in the wild and I have to test against them. I don't think the ?WSDL pages set cache expiry information either. Oops. Summary: even if SOAP over POST itself doesn't have proxy cache problems, the other bits of the stack can introduce them. And, because the people writing SOAP stacks tend to live in the SOAP world rather than the depths of HTTP, they tend to forget about some of the details of layers underneath, like the effect of proxies. -steve
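For what it's worth, the missing piece Steve describes is a one-liner in the servlet API (`response.setHeader("Cache-Control", "no-cache")` from a JSP scriptlet), and the query-string workaround is equally small. A sketch in Python rather than Java, purely for illustration; the names here are mine, not anything in Axis:

```python
import random
from urllib.parse import urlsplit, urlunsplit

# Headers a diagnostics page like happyaxis.jsp should send, so that
# intermediaries don't serve up stale "happiness" for days on end.
NO_CACHE_HEADERS = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",  # for HTTP/1.0 caches
    "Expires": "0",
}

def cache_bust(url, rng=random):
    """Append a random query parameter so a shared cache treats each
    request as a fresh URL. A client-side workaround, not a fix."""
    scheme, netloc, path, query, frag = urlsplit(url)
    token = "nocache=%08x" % rng.getrandbits(32)
    query = query + "&" + token if query else token
    return urlunsplit((scheme, netloc, path, query, frag))
```

The workaround only helps the client doing the busting; every other caller still gets whatever the proxy cached, which is why the headers belong on the server side.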
Steve Loughran wrote: > Its a JSP page, and is designed to be (a) entirely standalone and (b) > easily extensible by users. So there are variations for things like > apache muse and other SOAP-based services. > > But, the little JSP page doesnt set the cache expiry info. So while it > works well for dev systems on the local net, if you go live with > something beyond the firewall, or if you are trying to do interop > tests against remote systems, checking for happyaxis only shows you if > the proxy server has, at some point in the past, been publishing a > status page. It doesn't even have to be remote. I've been bitten by the same kind of problem in production: - when someone set an admin page for a Servlet timer to stop/start using GET (once it didn't start at all, resulting in a very awkward meeting; another time it didn't stop, so two timers ended up running after a 'restart', screwing up the internal app state). - when someone set a batch job control using an .asp page to be started using GET (the browser returned from cache, never got to the origin server). - when the Tomcat team thinks you should manage webapp lifecycles using links. Tomcat is the Servlets *RI* for heaven's sake. I was listening to drunkandretired 93 a few nights ago. It seems the Rails crowd still think GWA is evil broken software, and the Rails use of GET links is fine (this, months after the "world of resources" shift). I said last week, on how people could not understand REST until now, that I thought it was willful disregard by the technical community due to commercial 'on message' pressures; and now those are fading, people can come out safely. But I wonder. cheers Bill
Hi, I've got a resource that acts like a map, with the path elements below the resource being keys to the map. So a request like: GET /foo/map/key would return the value associated with the key "key" in the map. I'd also like to be able to get the number of keys in the map, so it would be nice to use: GET /foo/map#size But, in general, clients don't send fragments to servers, so "#size" would never get to the server. And of course GET /foo/map/size doesn't work, because then "size" can't also be a valid key. Most alternatives I've come up with are ugly. Any recommendations on how to address metadata without hacks? eric.
In our REST framework for .Net, we opted to use '@' as prefix for built-in paths. So, you would get: GET /foo/map/@count Not perfect, but it does the trick. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org On May 6, 2007, at 10:53 PM, Eric Busboom wrote: > Hi, > > I've got a resource that acts like a map, with the path elements > below the resource being keys to the map. So a request like: > > GET /foo/map/key > > would return the value associated with the key "key" in the map. I'd > also like to be able to get the numbers of keys in the map, so it > would be nice to use: > > GET /foo/map#size > > But, in general, clients don't send fragments to servers, so "#size" > would never get to the server. And of course > > GET /foo/map/size > > doesn't work, because then "size" can't also be a valid key. Most > alternatives I've come up with are ugly. Any recommendations on how > to address metadata without hacks? > > eric. > >
Steve Bjorg wrote: > In our REST framework for .Net, we opted to use '@' as prefix for > built-in paths. So, you would get: > > GET /foo/map/@count > > Not perfect, but it does the trick. Except that unescaped '@' is a reserved character in URLs.
On 5/7/07, Eric Busboom <eric@...> wrote: > > Hi, > > I've got a resource that acts like a map, with the path elements > below the resource being keys to the map. So a request like: > > GET /foo/map/key > > would return the value associated with the key "key" in the map. I'd > also like to be able to get the numbers of keys in the map, so it > would be nice to use: > > GET /foo/map#size You could drive it with content-negotiation: GET /foo/map Accept: application/x-map-size This won't work for browser UAs, so an alternative is: GET /foo/map.size Alan Dean http://thoughtpad.net/alan-dean
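Alan's first suggestion amounts to dispatching on the Accept header; a minimal sketch of what the server side might look like (the `application/x-map-size` media type is his hypothetical, not a registered one, and the function name is mine):

```python
def get_map(accept_header, the_map):
    """Handle GET /foo/map, varying the representation on Accept."""
    if accept_header == "application/x-map-size":
        # The size "view" of the map, per the hypothetical media type.
        return "application/x-map-size", str(len(the_map))
    # Default representation: the whole map, one "key value" line each.
    body = "\n".join("%s %s" % kv for kv in sorted(the_map.items()))
    return "text/plain", body
```

(A real implementation would parse the Accept header's q-values rather than string-compare it, and would send a Vary: Accept header so caches keep the two representations apart.)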
Hi Eric, can the size be manipulated as a standalone resource? It doesn’t seem so to me – it would always be a function of how many keys there are, right? So in that case, asking for the size of the map is just a different view of the map as a whole, right? So, why not use the query string? (Or possibly a parameter; I’m not yet sure how and when to use those.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
[ Attachment content not displayed ]
Bill de hOra wrote: > I recall Mark Baker and James Strachan talking about paradigms and > mental gear shifts a few years back when it comes to 'getting' REST. I > mean seriously, what is there to get? How can an *entire industry* not > get REST until Q406 or thereabouts? I don't buy it. I think the wheels > are falling off the WS industry wagon, and SOA will be next. We're > witnessing one of those once a decade industry re-alignments. I often think that REST can be difficult for people to understand because it's so simple. My experience as someone who was (and still is) mainly a web person was that REST codified a lot of stuff I'd learned the hard way, in much the same way that when one learns a new software pattern it's often something one has already used many times, but now that it has a name it's easier to think about. With that background, when I had to make two computers talk to each other over an HTTP connection I'd come up with something relatively RESTful without a second thought. To someone with a different background of overcoming different problems, it could indeed seem very foreign. The mental block that gets me isn't so much the one over "how does this work?" but the obstinate belief that the web will never really work despite all of the evidence to the contrary. I sometimes feel like Johnson addressing Bishop Berkeley's theory of the non-existence of matter by kicking a stone and saying "I refute it thus". Plenty of people on this list had made computers talk to each other over HTTP and in accordance with HTTP before there was any hype around "REST", but people still insist that the web doesn't work and we need rubbish like SOAP to "fix" it, so we need something with an actual name before they can even dare to believe in it. 
Similarly, in other aspects of the web we have "Web2.0" which, as far as I can see, is the radical notion that the technology we've all been using for over 15 years might actually work and maybe we should just use it rather than trying to win the glory of being the person who fixes it.
Sorry, but you are mistaken. It's not a reserved character for path segments. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org On May 7, 2007, at 12:35 AM, Chris Burdess wrote: > Steve Bjorg wrote: >> In our REST framework for .Net, we opted to use '@' as prefix for >> built-in paths. So, you would get: >> >> GET /foo/map/@count >> >> Not perfect, but it does the trick. > > Except that unescaped '@' is a reserved character in URLs. > >
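Steve is right: in RFC 3986 the pchar production includes '@', so it may appear unescaped inside a path segment (it acts as a delimiter only in the authority component). Some general-purpose encoders percent-encode it anyway, which is harmless but worth knowing if clients build these URLs; illustrated with Python's urllib:

```python
from urllib.parse import quote, unquote

# '@' is legal unescaped in a path segment per RFC 3986, but a
# conservative encoder will percent-encode it unless told otherwise:
escaped = quote("@count")             # percent-encodes the '@'
literal = quote("@count", safe="@")   # leaves it alone

# Both spellings denote the same segment, so servers should compare
# path segments after percent-decoding:
same = unquote(escaped) == unquote(literal)
```

This is also why the '@count' convention needs an escape rule on the server side: "%40count" and "@count" arrive as the same segment once decoded.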
REST seems to encompass two orthogonal concepts. The first is its HTTP heritage with headers, status codes, etc. (and is under- appreciated or misunderstood, hence things like SOAP). The second, though, is a design paradigm like OO that prescribes how state and transitions are captured in representations that are exchanged during the request-response pattern. Tim's "aha!" moment was about the second. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org On May 7, 2007, at 7:04 AM, Jon Hanna wrote: > Bill de hOra wrote: > > I recall Mark Baker and James Strachan talking about paradigms and > > mental gear shifts a few years back when it comes to 'getting' > REST. I > > mean seriously, what is there to get? How can an *entire > industry* not > > get REST until Q406 or thereabouts? I don't buy it. I think the > wheels > > are falling off the WS industry wagon, and SOA will be next. We're > > witnessing one of those once a decade industry re-alignments. > > I often think that REST can be difficult for people to understand > because it's so simple. > > My experience as someone who was (and still is) mainly a web person > was > that REST codified a lot of stuff I'd learned the hard way in much the > same way that when one learns a new software pattern it's often > something one has already used many times, but now it has a name it's > easier to think about. > > With that background when I had to make two computers talk to each > other > over an HTTP connection I'd come up with something relatively RESTful > without a second thought. To someone with a different background of > overcoming different problems, it could indeed seem very foreign. > > The mental block that gets me isn't so much the one over "how does > this > work?" but the obstinate belief that the web will never really work > despite all of the evidence to the contrary. 
I sometimes feel like > Johnson addressing Bishop Berkeley's theory of the non-existence of > matter by kicking a stone and saying "I refute it thus". > > Plenty of people on this list had made computers talk to each other > over > HTTP and in accordance with HTTP before there was any hype around > "REST", but people still insist that the web doesn't work and we need > rubbish like SOAP to "fix" it, so we need something with an actual > name > before they can even dare to belief in it. Similarly, in other aspects > of the web we have "Web2.0" which, as far as I can see, is the radical > notion that the technology we've all been using for over 15 years > might > actually work and maybe we should just use it rather than trying to > win > the glory of being the person who fixes it. > > >
Jon Hanna <jon@...> writes: > Plenty of people on this list had made computers talk to each other over > HTTP and in accordance with HTTP before there was any hype around > "REST", but people still insist that the web doesn't work and we need > rubbish like SOAP to "fix" it, so we need something with an actual name > before they can even dare to belief in it. Similarly, in other aspects > of the web we have "Web2.0" which, as far as I can see, is the radical > notion that the technology we've all been using for over 15 years might > actually work and maybe we should just use it rather than trying to win > the glory of being the person who fixes it. Hear! Hear! -- Nic Ferrier http://www.tapsellferrier.co.uk
Steve Bjorg wrote: > REST seems to encompass two orthogonal concepts. The first is its HTTP > heritage with headers, status codes, etc. (and is under-appreciated or > misunderstood, hence things like SOAP). The second, though, is a design > paradigm like OO that prescribes how state and transitions are captured > in representations that are exchanged during the request-response > pattern. Tim's "aha!" moment was about the second. It's solely the second. The first part is how it does that, and the decisions as to how they work (e.g. which pieces of information we do and do not put in headers) comes entirely from how well that lets us do the second. While HTTP predates REST, REST was developed by examining HTTP and seeing what worked and what needed fixing in version 1.1. Again my analogy with a programmer learning a design pattern s/he has already used works here too. It's hence not correct to talk of the HTTP matters as being part of REST's heritage, IMO, but more accurate to describe these HTTP matters as being determined by following REST, albeit retroactively.
What does GET /foo/map/ (or GET /foo/map) return? --Chuck On 5/7/07, Eric Busboom <eric@...> wrote: > Hi, > > I've got a resource that acts like a map, with the path elements > below the resource being keys to the map. So a request like: > > GET /foo/map/key > > would return the value associated with the key "key" in the map. I'd > also like to be able to get the numbers of keys in the map, so it > would be nice to use: > > GET /foo/map#size > > But, in general, clients don't send fragments to servers, so "#size" > would never get to the server. And of course > > GET /foo/map/size > > doesn't work, because then "size" can't also be a valid key. Most > alternatives I've come up with are ugly. Any recommendations on how > to address metadata without hacks? > > eric. > > > > Yahoo! Groups Links > > > >
On May 7, 2007, at 2:23 AM, A. Pagaltzis wrote: > Hi Eric, > > can the size be manipulated as a standalone resource? It doesn't > seem so to me [...] So, why not use the query string? Aristotle, You're right, it is not really an independent resource, but addressing dependent resources is a common case, such as where GET /foo/person returns XML for all information about a person and GET /foo/person/firstName returns just the person's first name. If "size" is a subordinate of the map, it should, I think, be addressed with a path. Steve Bjorg says: > In our REST framework for .Net, we opted to use '@' as prefix for > built-in paths. So, you would get: > > GET /foo/map/@count Here the count is a path element, but how do you distinguish it from the key "@count"? URI encoding or some other escaping would certainly work, but that is a source of bugs when a caller forgets to escape the "@". And, it gives escaping or not escaping a semantic character, which feels wrong to me. The alternative, disallowing "@" as the first character in a key, would be a fine solution for specific implementations, but I'd like to keep this model general, so restricting the set of valid characters in a key is unpalatable. Alan Dean says: > You could drive it with content-negotiation: > > GET /foo/map > Accept: application/x-map-size As with the query string solution, I much prefer that the size appear as part of the URL. This use does not seem consistent with the intent of the Accept: header. Alan Dean also says: > This won't work for browser UAs, so an alternative is: > > GET /foo/map.size This is the solution that I'd prefer, although perhaps with a different character: "map$size" or "map!size". The delimiter character is very unusual inside path elements in the REST and RESTish interfaces I've studied. In this case, the size is not really a path, but it does appear in the URI and the URI does address a resource that is logically connected to the map. eric.
On May 7, 2007, at 9:14 AM, Jon Hanna wrote: > Steve Bjorg wrote: >> REST seems to encompass two orthogonal concepts. The first is its >> HTTP heritage with headers, status codes, etc. (and is under- >> appreciated or misunderstood, hence things like SOAP). The >> second, though, is a design paradigm like OO that prescribes how >> state and transitions are captured in representations that are >> exchanged during the request-response pattern. Tim's "aha!" >> moment was about the second. > > It's solely the second. > I have a hard time accepting this absolute statement. Most discussions on this list revolve around how to use http methods, status codes, and headers to map REST concepts into HTTP. While REST might be conceptually larger, it is bounded by its HTTP heritage. While they aren't the same, they are Siamese twins. In other words, would you give a talk about REST and never mention HTTP and its methods, codes, and headers? - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
On May 7, 2007, at 9:44 AM, Chuck Hinson wrote: > What does GET /foo/map/ (or GET /foo/map) return? If the server wants to allow it, it would return the entire map. Certainly, this isn't reasonable in all cases, such as where the "map" takes DNS names as keys and returns IP addresses; a 100M entry response is probably not good for server performance. For small maps, you could return the whole data set and let the client compute the size, but that's not a general solution. Also, it isn't just the "size" that I'd like to get. There are also: /map/values The set of values /map/keys The set of keys and other possibilities for other data types, like sequences or objects. eric.
On May 7, 2007, at 9:47 AM, Eric Busboom wrote: > Steve Bjorg says: > >> In our REST framework for .Net, we opted to use '@' as prefix for >> built-in paths. So, you would get: >> >> GET /foo/map/@count > > Here the count is a path element, but how do you distinguish it from > the key "@count"? > > URI encoding or some other escaping would certainly work, but that > is a source of bugs when a caller forgets to escape the "@". And, > it gives escaping or not escaping a semantic character, which feels > wrong to me. > > The alternative, disallowing "@" as the first character in a key > would be a fine solution for specific implementations, but I'd like > to keep this model general, so restricting the set of valid > characters in a key is unpalatable. You are correct, we made the design guideline that '@' should only be used for pre-defined suffixes. You can always add an escape mechanism if need be. In the end, it was about usability for us. Using a legal prefix is simple and intuitive. But, it's a compromise as well. So far, this compromise has worked out well for us. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Would HEAD be another option? If the element count is always returned as a header, this method would be appropriate. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org On May 7, 2007, at 10:22 AM, Eric Busboom wrote: > > On May 7, 2007, at 9:44 AM, Chuck Hinson wrote: > > > What does GET /foo/map/ (or GET /foo/map) return? > If the server wants to allow it, it would return the entire map. > Certainly, this isn't reasonable in all cases, such as where the > "map" takes DNS names as keys and returns IP addresses; a 100M entry > response is probably not good for server performance. > > For small maps, you could return the whole data set and let the > client compute the size, but that's not a general solution. Also, it > isn't just the "size" that I'd like to get. There are also: > > /map/values The set of values > /map/keys The set of keys > > and other possibilities for other data types, like sequences or > objects. > > eric. > > >
On May 7, 2007, at 10:34 AM, Steve Bjorg wrote: > Would HEAD be another option? If the element count is always be > returned as a header, this method would be appropriate. Possibly, but it isn't entirely cohesive with the intent of headers. I think that the headers should be reserved for information that supports transfer of a representation, not for a representation itself. Another possibility is OPTIONS, in the sense that OPTIONS is supposed to return information about a resource. But it still feels weak. Another possibility is to create a new method, like "META". With a new method, GET /foo/map/size Refers to the key "size" and META /foo/map/size Refers to the size of a map. But, I've not seen creating new HTTP methods proposed as a solution to anything on this list. Do the list members have a disinclination to do that? Sure, you lose the caching you might get with GET, but that did not kill PROPFIND. For my own part, creating new methods seems like a slippery slope, and we certainly don't want a proliferation of new methods as a solution to every problem. However, WebDAV's new methods didn't open the gates of hell, so perhaps there is room for a few more. I suspect that it would be better to create a new method than to overload and abuse old ones. eric.
On May 7, 2007, at 9:59 AM, Steve Bjorg wrote: > On May 7, 2007, at 9:14 AM, Jon Hanna wrote: > > > Steve Bjorg wrote: > >> REST seems to encompass two orthogonal concepts. The first is its > >> HTTP heritage with headers, status codes, etc. (and is under- > >> appreciated or misunderstood, hence things like SOAP). The > >> second, though, is a design paradigm like OO that prescribes how > >> state and transitions are captured in representations that are > >> exchanged during the request-response pattern. Tim's "aha!" > >> moment was about the second. > > > > It's solely the second. > > > > I have a hard time accepting this absolute statement. Er, well, sometimes it is better to have a hard time. The former is web architecture and the latter is REST. > Most > discussions on this list revolve around how to use http methods, > status codes, and headers to map REST concepts into HTTP. While REST > might be conceptually larger, it is bounded by its HTTP heritage. Bounded? No way. It isn't even 1/3rd of the style. > While they aren't the same, they are Siamese twins. In other words, > would you give a talk about REST and never mention HTTP and its > methods, codes, and headers? Yes, I have, though most of the time I give talks about the evolution of web architecture along with REST. That said, the purpose of this list (as Mark set it up) is to offer both discussion of the style and implementation advice regarding web architecture. That's why there is so much discussion of HTTP. I think people do a real disservice to the reader when they start mixing up architecture and architectural style. REST, as a style, exists independent of the Web's timeline. HTTP, in contrast, has to be viewed through the limitations of particular implementations. REST wouldn't be a very good abstraction if its understanding depended on the details of today's implementations (which are far less limited than those of 1997, 1995, and 1993). ....Roy
On May 7, 2007, at 10:50 AM, Eric Busboom wrote: > For my own part, creating new methods seems like a slippery slope, > and we certainly don't want a proliferation of new methods as a > solution to every problem. However, WebDAV's new methods didn't open > the gates of hell, so perhaps there is room for a few more. I > suspect that it would be better to create a new method than to > overload and abuse old ones. WebDAV most certainly opened the gates of hell. Anyone who has read the versioning or ACL specs should realize that by now. ....Roy
Steve Bjorg wrote: > > [...] would you give a talk about REST and never mention HTTP and its > methods, codes, and headers? > Yes, if one were somewhat time-constrained and the presentation were to an audience sufficiently familiar with software architecture. An extended version of the same talk would introduce /only/ the Web and the basic HTTP methods in limited examples, and would probably omit details such as status codes and headers to avoid muddying the waters. Having given this talk (or similar) a couple of times, I can vouch for the effectiveness of communicating the essential aspects of the style without (much) reference to implementation. In one instance, I used CREATE, READ, WRITE and REMOVE in place of POST, GET, PUT and DELETE, to great success. It was clear, of course, to several members of the audience what was going on. - Elias
On May 7, 2007, at 11:14 AM, Roy T. Fielding wrote: > On May 7, 2007, at 10:50 AM, Eric Busboom wrote: >> solution to every problem. However, WebDAV's new methods didn't open >> the gates of hell, so perhaps there is room for a few more. I > > WebDAV most certainly opened the gates of hell. Anyone who has read > the versioning or ACL specs should realize that by now. Great ... two more specs I've got to read ... :( Are the problems with WebDAV's versioning and ACL related to the additional methods? How? Using a new method for information about the map, like META, isn't really a good option anyway. Getting the size with it might be OK, but it would also be useful, for some resources, to get the set of keys or the set of values. In that case: META /foo/map/values isn't appealing, because you'd be getting a set of values, and I'd expect that to come with a GET, and the set of values certainly is not metadata. eric.
Here's two recommendations to mull over. One: GET /foo/map/key (gets map) GET /foo/map/meta/size GET /foo/map/meta/values GET /foo/map/meta/keys GET /foo/map/meta (all metadata) Two (not far from your original) GET /foo/map/key (gets map) GET /foo/map;size GET /foo/map;values GET /foo/map;keys GET /foo/map;meta (all metadata) Pete Eric Busboom wrote: > An other possibility is to create a new method, like "META". With a > new method, > > GET /foo/map/size Refers to the key "size" and > META /foo/map/size Refers to the size of a map.
On May 7, 2007, at 11:11 AM, Roy T. Fielding wrote: > On May 7, 2007, at 9:59 AM, Steve Bjorg wrote: >> On May 7, 2007, at 9:14 AM, Jon Hanna wrote: >> >> > Steve Bjorg wrote: >> >> REST seems to encompass two orthogonal concepts. The first is its >> >> HTTP heritage with headers, status codes, etc. (and is under- >> >> appreciated or misunderstood, hence things like SOAP). The >> >> second, though, is a design paradigm like OO that prescribes how >> >> state and transitions are captured in representations that are >> >> exchanged during the request-response pattern. Tim's "aha!" >> >> moment was about the second. >> > >> > It's solely the second. >> > >> >> I have a hard time accepting this absolute statement. > > Er, well, sometimes it is better to have a hard time. The former > is web architecture and the latter is REST. Thanks for settling this. > >> Most >> discussions on this list revolve around how to use http methods, >> status codes, and headers to map REST concepts into HTTP. While REST >> might be conceptually larger, it is bounded by its HTTP heritage. > > Bounded? No way. It isn't even 1/3rd of the style. Can you cite an example of what you mean? > >> While they aren't the same, they are Siamese twins. In other words, >> would you give a talk about REST and never mention HTTP and its >> methods, codes, and headers? > > Yes, I have, though most of the time I give talks about the evolution > of web architecture along with REST. > > That said, the purpose of this list (as Mark set it up) is to offer > both discussion of the style and implementation advice regarding > web architecture. That's why there is so much discussion of HTTP. > > I think people do a real disservice to the reader when they start > mixing up architecture and architectural style. REST, as a style, > exists independent of the Web's timeline. HTTP, in contrast, has to > be viewed through the limitations of particular implementations. 
> REST wouldn't be a very good abstraction > depended on the details of today's implementations (which are far > less limited than those of 1997, 1995, and 1993). > > ....Roy > > Ok, so here is my conundrum: I was invited to give a short presentation on REST following a presentation on SOAP (talk about the one-eyed guiding the blind! As a side point, I did refer them to you first :) ). Since I only have 20 minutes to convey a few key points, I can either focus on the underlying richness of HTTP and how it can be tapped by applications (I think it would be a great service to web-service authors to introduce them to the magic of caching and simple redirects) or, if I stay true to the topic, talk about application architecture and distribution of state and stateless protocols. Either topic would probably be enlightening to many, but it's undeniably clear that the latter is what I need to convey to stay on topic. Is there a key message you focus on as the take-away for your talks on REST? - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
>>>>> "Jon" == Jon Hanna <jon@...> writes:
Jon> I often think that REST can be difficult for people to
Jon> understand because it's so simple.
I think it is more the difference between computer programs (only
function calls) and distributed architectures. Since the inception of
the second computer, people have tried to apply the function call
paradigm across different computers. And it simply doesn't work. A
remote call isn't the same as a local call.
So I predict that whatever death may befall SOAP or WS, there will be
another architecture attempting to do exactly that.
For legions of programmers, there's only the function call. They will
simply never grasp distributed architectures.
And don't forget the lure of IDE vendors, who can take things that are
very complex and turn them into point-and-click. Hardly anyone ever asks
why we needed to develop the complex things in the first place.
- --
All the best,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your outdated email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
Explicitly exposing "map" is leaking metadata. An earlier example showed something like this: GET /foo/persons This is actually on the right course. Let me generalize it as: GET /foo/collection Now simply add a meta-tag called something like "content" that gets you to the real collection contents, as you specified originally. Other subsequent tags get you other metadata. GET /foo/collection/content (returns raw map) GET /foo/collection/content/keyValue (returns value based on key) GET /foo/collection/size (returns size of collection) GET /foo/collection/type (returns "map") GET /foo/collection/keys (returns all key values) GET /foo/collection/values (returns all values) ... > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] > On Behalf Of Peter Lacey > Sent: Monday, May 07, 2007 1:49 PM > To: Eric Busboom > Cc: REST Discuss > Subject: Re: [rest-discuss] Addressing metadata > > Here's two recommendations to mull over. > > One: > GET /foo/map/key (gets map) > GET /foo/map/meta/size > GET /foo/map/meta/values > GET /foo/map/meta/keys > GET /foo/map/meta (all metadata) > > Two (not far from your original) > > GET /foo/map/key (gets map) > GET /foo/map;size > GET /foo/map;values > GET /foo/map;keys > GET /foo/map;meta (all metadata) > > Pete > Eric Busboom wrote: > > An other possibility is to create a new method, like "META". With a > > new method, > > > > GET /foo/map/size Refers to the key "size" and > > META /foo/map/size Refers to the size of a map.
So you could do something like: /foo/map/elements refers to the collection of elements in the map /foo/map/elements/key refers to an element in the map /foo/map/size refers to the size of the map or even /foo/map which returns an XML document containing size and other metadata for the map Even /foo/mapsize is reasonable - it's just the name of a resource that tells you how many elements another resource contains; you just have to have a way to tell clients the relationship between the two resources. If you really want to get fancy, define yourself a map discovery document (at /foo/map) with links to all the important resources related to your map. <description> <elements>/foo/map/elements</elements> <keySet>/foo/map/keys</keySet> <valueSet>/foo/map/values</valueSet> <mapSize>/foo/mapsize</mapSize> </description> Teach your clients to understand this document and then you can make the URIs whatever you want. --Chuck On 5/7/07, Eric Busboom <eric@...> wrote: > > On May 7, 2007, at 9:44 AM, Chuck Hinson wrote: > > > What does GET /foo/map/ (or GET /foo/map) return? > If the server wants to allow it, it would return the entire map. > Certainly, this isn't reasonable in all cases, such as where the > "map" takes DNS names as keys and returns IP addresses; a 100M-entry > response is probably not good for server performance. > > For small maps, you could return the whole data set and let the > client compute the size, but that's not a general solution. Also, it > isn't just the "size" that I'd like to get. There are also: > > /map/values The set of values > /map/keys The set of keys > > and other possibilities for other data types, like sequences or objects. > > eric.
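[Editor's note: Chuck's discovery-document idea can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and element names mirror the example in the post, and nothing here is a real API.]

```python
# Sketch of the "map discovery document": the server publishes one
# well-known resource (/foo/map) that links to everything else, so
# clients never hard-code the other URIs. All names are illustrative.
import xml.etree.ElementTree as ET

def build_discovery(links):
    """Server side: render a discovery document from rel-name -> URI."""
    root = ET.Element("description")
    for rel, uri in links.items():
        ET.SubElement(root, rel).text = uri
    return ET.tostring(root, encoding="unicode")

def resolve(doc, rel):
    """Client side: look up the URI for a given relation name."""
    return ET.fromstring(doc).findtext(rel)

doc = build_discovery({
    "elements": "/foo/map/elements",
    "keySet": "/foo/map/keys",
    "valueSet": "/foo/map/values",
    "mapSize": "/foo/mapsize",
})
print(resolve(doc, "mapSize"))  # -> /foo/mapsize
```

Because the client only understands the document format and the relation names, the server remains free to move the underlying URIs around.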
On 7-mei-2007, at 21:20, Berend de Boer wrote: > I think it is more the difference between computer programs (only > function calls) and distributed architectures. Since the inception of > the second computer people have tried to apply the function call > paradigm across different computers. And it simply doesn't work. A > remote call isn't the same as a local call. > > So I predict that whatever death may befall SOAP or WS, there will be > another architecture attempting to do exactly that. I don't like to defend SOAP (especially in this group), but it isn't an RPC mechanism; it has more similarities with messaging. That's why I always call it XML Messaging. Unfortunately, most of the vendors have created APIs which completely ignore this messaging paradigm and indeed offer RPC-like functionality. I blame the vendors, not the spec. Arjen Poutsma Interface21 E: apoutsma@... W: www.interface21.com B: blog.interface21.com/arjen
Arjen Poutsma <apoutsma@...> writes: > I don't like to defend SOAP (especially in this group), but it isn't > a RPC mechanism; it has more similarities with messaging. That's why > I always call it XML Messaging. Unfortunately, most of the vendors > have created API's which completely ignore this messaging paradigm, > and indeed offer RPC-like functionality. Pah. It started out as RPC. It just moved to messaging as soon as it became clear that it didn't really work. > I blame the vendors, not the spec. Yes. They must take a lot of the blame. -- Nic Ferrier http://www.tapsellferrier.co.uk
On 5/8/07, Nic James Ferrier <nferrier@...> wrote: > Arjen Poutsma <apoutsma@...> writes: > > > I don't like to defend SOAP (especially in this group), but it isn't > > an RPC mechanism; it has more similarities with messaging. That's why > > I always call it XML Messaging. Unfortunately, most of the vendors > > have created APIs which completely ignore this messaging paradigm, > > and indeed offer RPC-like functionality. > > Pah. It started out as RPC. It just moved to messaging as soon as it > became clear that it didn't really work. Most of the Java APIs are still in the SOAP-as-RPC world view. I have a theory for this, and it's not just 'it's the vendors' fault'. Actually, I have two. Theory one: the Sapir-Whorf hypothesis in action. Java and C# use RPC (or method calls on objects) because it is the thing that the languages allow you to express. If everything in your language's world view is an object you make synchronous method calls on, that is how you view the world. Languages that are more messaging-centric could help here, be they Erlang, Smalltalk or some extensions to the existing languages (ooh, continuations!). Theory two: developers don't want to do distributed computing. This is my newer theory, and needs some expansion. Rather than say 'it's the tool vendor's fault', I'm now suspecting that the normal developer out there is quite happy to pretend that everything is running on a single box. That is, even if their GUI is a web page, possibly a fancy AJAXy page, they still like to pretend that stuff is working locally. RPC-based IPC can maintain that illusion, even if it is just a big illusion. I don't know how you test this, but I think it could be one argument for why applets, Java Web Start and the like have failed to go mainstream. That is, the problems there are not just technical, but architectural. -steve > > > > I blame the vendors, not the spec. > > Yes. They must take a lot of the blame.
"Steve Loughran" <steve.loughran.soapbuilders@...> writes: > theory one: the sapir-whorf hypothesis in action. > Java and C# use RPC (or method calls on objects) because it is the > thing that the languages allow you to express. If everything in > your language's world view is an object you make synchronous method > calls on, that is how you view the world. This is patently true. Most of the language vendors don't give a fek about multi-language. Most of them are language bigots after all. It's quite a common thing, even among non-vendors! > theory two: developers don't want to do distributed computing. > > This is my newer theory, and needs some expansion. Rather than say > 'it's the tool vendor's fault', I'm now suspecting that the normal > developer out there is quite happy to pretend that everything is > running on a single box. That is, even if their GUI is a web page, > possibly a fancy AJAXy page, they still like to pretend that stuff is > working locally. RPC-based IPC can maintain that illusion, even if it > is just a big illusion. I think this has something in it.... but I think it's just that people have not realized the leap they need to take yet. A lot of programmers are still way too overexcited by the ideas of OOP to see the bigger picture of the web. Someone earlier said the move to REST would be a once-in-10-years kind of event. I think it's more like once in 50 years... the web is the first really scalable distributed computing platform we've had. Before, people doing DC were just mucking about. Now it's practical and programmers have to catch up to that. We'll wake up one day soon and everybody will be writing REST apps more or less badly, in just the same way that OOP happened. -- Nic Ferrier http://www.tapsellferrier.co.uk
On 2 May 2007, at 09:56, rogervdkimmenade wrote: > Within the TestReport the URI of the corresponding TestRecipe is put. > The test results can be viewed by: > GET http://Roger/Tests/TestReport/TestReport12 (BTW, I don't really fancy the CamelCase style URLs, but I guess that's not as important.. and do we need to repeat 'Test' everywhere when it's under /Test ?) What about including the test run as a real resource? After all it has state (running, finished, etc) and references other resources, such as the recipe and the report. (I'm shortening the URLs and content): POST /TestRecipe <testrecipe> .. 201 Created Location: /TestRecipe/13 POST /TestRun <testrun> <testrecipe xlink:href="/TestRecipe/13" /> </> 201 Created Location: /TestRun/192 GET /TestRun/192 <testrun> <testrecipe xlink:href="/TestRecipe/13" /> <status>Running</status> .. (Could include data such as when it was started etc) GET /TestRun/192 <testrun> <testrecipe xlink:href="/TestRecipe/13" /> <status>Finished</status> <result>Failed</result> <report xlink:href="/TestRun/192/report" /> .. GET /TestRun/192/report <testreport> .. This means that if some other conditions have changed, you can post a new test run using the old test recipe URI. -- Stian Soiland, myGrid team School of Computer Science The University of Manchester http://www.cs.man.ac.uk/~ssoiland/
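[Editor's note: Stian's "test run as a real resource" protocol can be modeled with a few lines of Python. There is no actual HTTP here; the methods stand in for the POST/GET handlers, and all the names (TestRunStore, the URI shapes) are illustrative, not from any framework.]

```python
# In-memory model of the test-run resource: POST creates a run that
# references a recipe; GET returns its current state; when the run
# finishes, the representation grows a link to the report resource.
import itertools

class TestRunStore:
    def __init__(self):
        self._runs = {}
        self._ids = itertools.count(1)

    def post(self, recipe_uri):
        """POST /TestRun -> 201 Created + Location of the new run."""
        run_id = next(self._ids)
        self._runs[run_id] = {"testrecipe": recipe_uri, "status": "Running"}
        return 201, f"/TestRun/{run_id}"

    def finish(self, run_id, result):
        """Server-side state change when the run completes."""
        run = self._runs[run_id]
        run["status"] = "Finished"
        run["result"] = result
        run["report"] = f"/TestRun/{run_id}/report"

    def get(self, run_id):
        """GET /TestRun/{id} -> current representation of the run."""
        return dict(self._runs[run_id])

store = TestRunStore()
code, location = store.post("/TestRecipe/13")  # 201, /TestRun/1
store.finish(1, "Failed")
print(store.get(1)["status"])  # Finished
```

The point of the design survives the simplification: a client can poll the run URI, and only discovers the report URI once the run's own state says it exists.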
On 8-mei-2007, at 10:21, Steve Loughran wrote: > theory one: the sapir-whorf hypothesis in action. > Java and C# use RPC (or method calls on objects) because it is the > thing that the languages allow you to express. If everything in in > your language's world view is an object you make synchronous method > calls on, that is how you view the world. Interesting theory. That would also explain why some people are doing RPC over message queues, completely ignoring the asynchronous aspect of MQ. For instance, see Lingo (http://lingo.codehaus.org/) and Spring (http://static.springframework.org/spring/docs/2.0.x/reference/remoting.html#remoting-jms) Arjen Interface21 E: apoutsma@... W: www.interface21.com B: blog.interface21.com/arjen
Greetings. Not apropos any particular response, but the OP might want to look at ARK, which is an Internet Draft which you can find by googling "kunze ark". One of the features of that as a resource naming methodology, is that for any resource with a given URL, the same URL with a single '?' appended retrieves metadata about the resource, and the URL with a pair of them '??' retrieves a statement of the resource's persistence. The latter feature is of particular interest to the ARK spec's author, John Kunze, since one of the spec's concerns is persistent names for resources, but the discussion of resource metadata, and the approach which sees simplicity and some decenteredness as virtues, would probably strike a relevant chord here. And it doesn't require any extension of the set of HTTP verbs! All the best, Norman -- ------------------------------------------------------------------ Norman Gray : http://nxg.me.uk eurovotech.org : University of Leicester, UK
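[Editor's note: the ARK convention Norman describes has a mechanical shape worth seeing. The classifier below is an illustrative sketch of that convention only, not an implementation of the ARK Internet Draft.]

```python
# ARK's inflection rule: a URL names the resource, the same URL with a
# trailing '?' retrieves metadata about it, and with '??' retrieves a
# statement of its persistence. A server dispatcher could classify the
# raw request target like this (illustrative only).
def classify_ark(target):
    if target.endswith("??"):
        return "persistence"
    if target.endswith("?"):
        return "metadata"
    return "resource"

print(classify_ark("http://example.org/ark:/12025/654xz321?"))  # metadata
```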
On May 7, 2007, at 11:33 AM, Eric Busboom wrote: > Are the problems with WebDAV's versioning and ACL related to the > additional methods? How? The core problem is complexity. They treat every new object as a new data type with a new set of methods to manipulate it. The REST way of doing it would be to map the data into resources and provide links between them, thereby allowing all of the existing methods to apply where needed for retrieval, updates, etc. Versioning exposes a sequence of related resources and a map of that sequence. ACLs are just a related resource that happens to influence access control on the server. What is central to Web Architecture (so central, in fact, that I forgot to even mention it as the primary design goal for REST) is that the Web is the set of resources interlinked by URIs. The problem isn't just that there are a large number of new methods in those specs, but that the new methods supplant what should have been resources that respond to GET. The result is that a user of these technologies must learn an entirely new vocabulary and a new set of tools for something that could easily have been accomplished via hypertext. ....Roy
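[Editor's note: Roy's alternative — expose versioning as interlinked resources rather than new methods — can be sketched concretely. The URI shapes and dict fields below are invented for illustration; this is not WebDAV or any real versioning protocol.]

```python
# Versioning "the REST way": the version history is itself a resource,
# retrievable with plain GET, whose representation is just links to the
# individual version resources. No new methods, no new vocabulary.
def version_history(doc_uri, n_versions):
    """Representation of the history resource: an ordered map of links."""
    return {
        "self": f"{doc_uri}/history",
        "versions": [f"{doc_uri}/history/{i}" for i in range(1, n_versions + 1)],
        "latest": f"{doc_uri}/history/{n_versions}",
    }

h = version_history("/docs/spec", 3)
print(h["latest"])  # /docs/spec/history/3
```

Each linked version would in turn answer GET with its own content, so a generic client needs nothing beyond the methods it already knows.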
On May 7, 2007, at 12:16 PM, Steve Bjorg wrote: >>> Most >>> discussions on this list revolve around how to use http methods, >>> status codes, and headers to map REST concepts into HTTP. While REST >>> might be conceptually larger, it is bounded by its HTTP heritage. >> >> Bounded? No way. It isn't even 1/3rd of the style. > > Can you cite an example of what you mean? Representational State Transfer. HTTP is only the transfer part. URIs, media types, and hypertext as the engine of application state have only incidental connections with HTTP (HTTP was created to do that stuff more efficiently than the original attempts to do so using FTP and Gopher, just as waka will do so more efficiently than HTTP). > Ok, so here is my conundrum: I was invited to give a short > presentation on REST following a presentation on SOAP (talk about > the one-eyed guiding the blind! As a side point, I did refer them > to you first :) ). Since I only have 20 minutes to convey of a few > key points, I can either focus on the underlying richness of HTTP > and how it can be tapped by applications (I think it would be a > great service to web-service authors to introduce them to the magic > of caching and simple redirects) or, if I stay true to the topic, > talk about application architecture and distribution of state and > stateless protocols. Either topic would probably be enlightening > to many, but it's undeniably clear that the latter is what I need > to convey to stay on topic. The style can be explained in 5 minutes, with the remaining time spent on your HTTP examples. Conference organizers are always going to be clueless, for the same reason that the SOAP enthusiasts were clueless when dealing with the comparison. They aren't interested in facts. Alternatively, just start your talk with "I am going to talk about web architecture as it exists today, not REST" and you will be fine. > Is there a key message you focus on as the key take away for your > talks on REST? 
Engineer for serendipity. ....Roy
On 7-May-07, at 10:04 AM, Jon Hanna wrote: > Bill de hOra wrote: > > I recall Mark Baker and James Strachan talking about paradigms and > > mental gear shifts a few years back when it comes to 'getting' > REST. I > > mean seriously, what is there to get? How can an *entire > industry* not > > get REST until Q406 or thereabouts? I don't buy it. I think the > wheels > > are falling off the WS industry wagon, and SOA will be next. We're > > witnessing one of those once a decade industry re-alignments. > > I often think that REST can be difficult for people to understand > because it's so simple. > ... we need something with an actual name > before they can even dare to belief in it. Similarly, in other aspects > of the web we have "Web2.0" which, as far as I can see, is the radical > notion that the technology we've all been using for over 15 years > might > actually work and maybe we should just use it rather than trying to > win > the glory of being the person who fixes it. Ah very well put sir. --Toby
Alan Dean wrote: > > > On 5/7/07, Eric Busboom <eric@... > <mailto:eric%40clarinova.com>> wrote: >> >> Hi, >> >> I've got a resource that acts like a map, with the path elements >> below the resource being keys to the map. So a request like: >> >> GET /foo/map/key >> >> would return the value associated with the key "key" in the map. I'd >> also like to be able to get the number of keys in the map, so it >> would be nice to use: >> >> GET /foo/map#size > > You could drive it with content-negotiation: > > GET /foo/map > Accept: application/x-map-size > > This won't work for browser UAs, so an alternative is: > > GET /foo/map.size > > Alan Dean > http://thoughtpad.net/alan-dean <http://thoughtpad.net/alan-dean> > The use of content negotiation and representations raises an interesting question - is metadata about a resource an alternative representation of the resource, or is it a different resource? I can see lots of utility in having an easy way to get descriptions of a resource by requesting a familiar mime-type, but this technique doesn't seem to be widely used. I'm wondering if there are deeper issues that prevent adoption of this... mike
Mike Pittaro <mikeyp@...> writes: > The use of content negotiation and representations raises an interesting question - > Is metadata about a resource an alternative representation of a resource, or is it > a different resource ? It's metadata about the resource. -- Nic Ferrier http://www.tapsellferrier.co.uk
On Tue, 2007-05-08 at 09:21 +0100, Steve Loughran wrote:
> theory one: the sapir-whorf hypothesis in action.
I'd agree with this, to a large extent ... we're starting to see a
shift, though, in the available syntax for "common" languages, for
representing things other than synchronous-procedure-call.
> theory two: developers dont want to do distributed computing.
Corollary: much of the time developers don't need to do distributed
computing.
In a local network, with homogeneous machines, when you control
everything, many of the distributed problems are very attenuated. And
you can simulate asynchronous operations with synchronous ones with a
queue + thread on the other side. It can be hard to justify using an
otherwise "foreign" approach, especially one that's not well supported
by the languages/technologies in use (back to theory one).
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
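[Editor's note: the "queue + thread on the other side" simulation Josh mentions is easy to make concrete with the Python stdlib. The names (async_worker, submit) are illustrative; this is a sketch of the pattern, not any particular library.]

```python
# Simulate asynchronous operations with synchronous ones: the caller
# enqueues work for a background thread and gets back a handle it can
# block on later, instead of blocking at the call site.
import queue
import threading

def async_worker():
    jobs = queue.Queue()

    def loop():
        while True:
            func, args, done = jobs.get()
            done.put(func(*args))  # run the sync call, publish the result

    threading.Thread(target=loop, daemon=True).start()

    def submit(func, *args):
        done = queue.Queue(maxsize=1)
        jobs.put((func, args, done))
        return done  # caller blocks on done.get() only when it needs the value

    return submit

submit = async_worker()
handle = submit(lambda x: x * 2, 21)
print(handle.get())  # 42
```

Which is exactly the attenuation Josh describes: on one homogeneous box this works fine, and it is only across a network that the illusion gets expensive.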
On 5/8/07, Mike Pittaro <mikeyp@...> wrote: > > The use of content negotiation and representations raises an interesting question - > Is metadata about a resource an alternative representation of a resource, or is it > a different resource ? > > I can see lots of utility is having an easy way to get descriptions of a resource, > by requesting a familiar mime-type, but this technique doesn't seem to be widely used. > I'm wondering if there are deeper issues that prevent adoption of this... You are certainly correct - it is not often used. Part of the reason why content negotiation is difficult is because the HTTP spec does not do a particularly good job of supporting it. Another big difficulty is that there is next-to-no support for it in today's browsers either. However, it is a significant part of my own 'REST conformant' framework (I'm avoiding the term RESTful in the hopes of not bringing down Roy's ire upon me) and it does help solve a certain class of problem very efficiently. In particular, it is very useful when the UA is *not* a browser. This is often the case in an enterprise environment where you are exposing services for integration purposes, i.e. I am using it instead of WS-* in the enterprise. I am willing to be corrected, but I get the sense that many on this list are targeting browsers over the web - a quite different proposition. For me, the critical aspect of a content-negotiation implementation is to have a representation format that is the superset of all information about the resource (both data and metadata). I typically use RDF for this purpose. Then, if an alternate representation format is requested, the task is simply to cut down the data to the subset required for the alternate representation. Regards, Alan Dean http://thoughtpad.net/alan-dean
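[Editor's note: Alan's superset-then-subset approach to content negotiation can be sketched directly. The media types and field subsets below are invented for illustration, and the superset is a plain dict rather than RDF.]

```python
# One superset representation per resource; each Accept value maps to
# the subset of fields that variant exposes. Unknown types get 406.
SUPERSET = {
    "size": 3,
    "keys": ["a", "b", "c"],
    "values": [1, 2, 3],
    "last-modified": "2007-05-08",
}

VIEWS = {
    "application/x-map-size": ["size"],
    "application/x-map-keys": ["keys"],
    "application/json": ["size", "keys", "values", "last-modified"],
}

def negotiate(accept):
    """Cut the superset down to the variant the Accept header asks for."""
    fields = VIEWS.get(accept)
    if fields is None:
        return 406, None  # Not Acceptable
    return 200, {k: SUPERSET[k] for k in fields}

print(negotiate("application/x-map-size"))  # (200, {'size': 3})
```

The appeal of the design is that adding a new representation is only a new entry in the view table; the resource itself never changes.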
Alan Dean wrote: > On 5/8/07, Mike Pittaro <mikeyp@...> wrote: >> >> The use of content negotiation and representations raises an >> interesting question - >> Is metadata about a resource an alternative representation of a >> resource, or is it >> a different resource ? >> >> I can see lots of utility is having an easy way to get descriptions of >> a resource, >> by requesting a familiar mime-type, but this technique doesn't seem to >> be widely used. >> I'm wondering if there are deeper issues that prevent adoption of this... > > You are certainly correct - it is not often used. > > Part of the reason why content negotiation is difficult is because the > HTTP spec does not do a particularly good job of supporting it. > Another big difficulty is that there is next-to-no support for it in > today's browsers either. > > However, it is a significant part of my own 'REST conformant' > framework (I'm avoiding the term RESTful in the hopes of not bringing > down Roy's ire upon me) and it does help solve a certain class of > problem very efficiently. In particular, it is very useful when the UA > is *not* a browser. This is often the case in an enterprise > environment where you are exposing services for integration purposes, > i.e. I am using it instead of WS-* in the enterprise. I am willing to > be corrected, but I get the sense that many on this list are targeting > browsers over the web - a quite different proposition. > Services was definitely the context I was thinking about, particularly data services. Rod Smith of IBM hosted a small enterprise mashup summit in San Francisco yesterday. One of the (inevitable) topics that came up was the use of REST versus WS-*/SOAP data services. I was surprised to hear that in general, companies prefer the simplicity of REST and use it widely, but consider SOAP as more 'reusable.' 
In this particular context, 'reusable' actually meant having a readily available description of the service, something that REST services are perceived to be lacking. ARK was also mentioned during the day, the first time I heard of it. (Norman's post this morning was the second.) > For me, the critical aspect of a content-negotiation implementation is > to have a representation format that is the superset of all > information about the resource (both data and metadata). I typically > use RDF for this purpose. Then, if an alternate representation format > is requested, the task is simply to cut down the data to the subset > required for the alternate representation. This seems like a good approach. Most of the data cases I'm working with seem to fall into a model where the metadata is an extension of the data, although that may not always be true. > > Regards, > Alan Dean > http://thoughtpad.net/alan-dean -- mikeyp@... http://www.snaplogic.org
Roy T. Fielding wrote:
>
>
> On May 7, 2007, at 11:33 AM, Eric Busboom wrote:
> > Are the problems with WebDAV's versioning and ACL related to the
> > additional methods? How?
>
> The core problem is complexity. They treat every new object as
> a new data type with a new set of methods to manipulate it.
> ...
Well, for RFC3253 (Versioning) that's only partly true. RFC3253 exposes
all objects as separate resources (such as versions and version
histories). There's only one exception (the version-tree report that was
added for server implementors who claimed they couldn't implement
version histories as proper resources).
Guess what, a similar mistake is currently being made in JSR-283
("simple versioning").
Best regards, Julian
On Tuesday, May 08, 2007, at 09:45PM, "Julian Reschke" <julian.reschke@...> wrote:
>Roy T. Fielding wrote:
>>
>>
>> On May 7, 2007, at 11:33 AM, Eric Busboom wrote:
>> > Are the problems with WebDAV's versioning and ACL related to the
>> > additional methods? How?
>>
>> The core problem is complexity. They treat every new object as
>> a new data type with a new set of methods to manipulate it.
> > ...
>
>Well, for RFC3253 (Versioning) that's only partly true. RFC3253 exposes
>all objects as separate resources (such as versions and version
>histories).
I think that Roy refers to properties not being resources in the WebDAV model
but that they are accessed and manipulated via PROPxxx.
There are more resources than there are versions.
Jan
>There's only one exception (the version-tree report that was
>added for server implementors who claimed they couldn't implement
>version histories as proper resources).
>
>Guess what, a similar mistake is currently being made in JSR-283
>("simple versioning").
>
>Best regards, Julian
Eric Busboom wrote: > > > Hi, > > I've got a resource that acts like a map, with the path elements > below the resource being keys to the map. Have GET /foo/map?about send back metadata about the resource, as much as you need, then declare victory. Atom is a suitable candidate format; in the future when you have lots of dictionaries you can use APP to manage them. I'd like to know what's in that map, or why you want to expose a container explicitly. Fwiw, collections in HTTP are tricky; the best working solution for resource lists is syndication formats like RSS and Atom. If you want to go the URI route, I would have thought a query parameter was a candidate for a dictionary lookup, on the basis that it seems clients are expected to know what to append onto the URL as the key - and query params are at least explicit in terms of design. cheers Bill
Roy T. Fielding wrote:
> On May 7, 2007, at 11:33 AM, Eric Busboom wrote:
>> Are the problems with WebDAV's versioning and ACL related to the
>> additional methods? How?
>
> The core problem is complexity. They treat every new object as
> a new data type with a new set of methods to manipulate it.
> The REST way of doing it would be to map the data into resources
> and provide links between them, thereby allowing all of the existing
> methods to apply where needed for retrieval, updates, etc.

Anyone wanting to add a method to HTTP should have to donate a kidney. That way people would think very carefully about each method, and never create more than two.
> The use of content negotiation and representations raises an
> interesting question - Is metadata about a resource an
> alternative representation of a resource, or is it a
> different resource ?

Yes.
On 5/8/07, Mike Pittaro <mikeyp@...> wrote:
> The use of content negotiation and representations raises an
> interesting question - Is metadata about a resource an alternative
> representation of a resource, or is it a different resource ?

For me: Data + Metadata = Resource

Alan Dean
http://thoughtpad.net/alan-dean
On 5/8/07, Josh Sled <jsled@...> wrote:
> On Tue, 2007-05-08 at 09:21 +0100, Steve Loughran wrote:
>> theory one: the sapir-whorf hypothesis in action.
>
> I'd agree with this, to a large extent ... we're starting to see a
> shift, though, in the available syntax for "common" languages, for
> representing things other than synchronous-procedure-call.

That, and maybe better support for trees and graphs of data.

>> theory two: developers don't want to do distributed computing.
>
> Corollary: much of the time developers don't need to do distributed
> computing.
>
> In a local network, with homogeneous machines, when you control
> everything, many of the distributed problems are very attenuated. And
> you can simulate asynchronous operations with synchronous ones with a
> queue + thread on the other side. It can be hard to justify using an
> otherwise "foreign" approach, especially one that's not well supported
> by the languages/technologies in use (back to theory one).

Every web page you serve up with an <a> link is a distributed app: you are telling the client to create a button that, when the user clicks on it, triggers a download and a render of something, an action that may trigger server-side behaviour. It just doesn't look like classic distributed computing, because it's so simple(*).

-steve

(*) and because the fault recovery is delegated to the user.
Jan Algermissen wrote:
> On Tuesday, May 08, 2007, at 09:45PM, "Julian Reschke" <julian.reschke@
> gmx.de <mailto:julian.reschke%40gmx.de>> wrote:
>> Roy T. Fielding wrote:
>>> On May 7, 2007, at 11:33 AM, Eric Busboom wrote:
>>>> Are the problems with WebDAV's versioning and ACL related to the
>>>> additional methods? How?
>>>
>>> The core problem is complexity. They treat every new object as
>>> a new data type with a new set of methods to manipulate it.
>> ...
>>
>> Well, for RFC3253 (Versioning) that's only partly true. RFC3253 exposes
>> all objects as separate resources (such as versions and version
>> histories).
>
> I think that Roy refers to properties not being resources in the WebDAV
> model but that they are accessed and manipulated via PROPxxx.

That's true, but it applies to WebDAV in general, not RFC3253 (Versioning) or RFC3744 (ACL). Introducing WebDAV properties as second-class citizens may have been a bad idea, but that happened long before.

> There are more resources than there are versions.

Best regards, Julian
On May 9, 2007, at 1:02 AM, Julian Reschke wrote:
>> I think that Roy refers to properties not being resources in the
>> WebDAV model but that they are accessed and manipulated via PROPxxx.
>
> That's true, but applies to WebDAV in general, not RFC3253
> (Versioning) or RFC3744 (ACL). Introducing WebDAV properties as
> second-class citizens may have been a bad idea, but that happened
> long before.

Yes, but that is what the thread was about -- the gates were opened and the negative effects are multiplied by each new extension (as opposed to the resource-only model, wherein the complexity of each new extension is linear).

....Roy
On Tue, 2007-05-08 at 07:37 -0700, Mike Pittaro wrote:
> The use of content negotiation and representations raises an
> interesting question - Is metadata about a resource an alternative
> representation of a resource, or is it a different resource ?
I tend to think that "meta-data" is a bit of a red herring. If you have
an image, and there's data about that image (dims, format, &c.), but the
application traffics in that image-descriptive-data, it's not really
data and meta-data ... it's all just data.
I'd argue that the only valid meta-data about an HTTP application is
that at the HTTP protocol level itself. That – and in that context – is
really data about the data.
Calling stuff "meta" is a lot cooler, though.
But, yeah, it's usually best if every separate resource is a separate
resource, even if it's intrinsically related to another resource.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
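Josh's "every separate resource is a separate resource" point can be sketched as a pair of linked resources, rather than a property side channel. All URIs and field names below are hypothetical:

```python
# Hypothetical sketch: treat "data about the data" as its own resource,
# linked from the primary resource's representation, instead of bundling
# it in or hiding it behind special methods.
resources = {
    "/images/42": {
        "content_type": "image/png",
        "links": {"meta": "/images/42/meta"},   # a plain link
    },
    "/images/42/meta": {
        "content_type": "application/json",
        "body": {"width": 640, "height": 480, "format": "png"},
    },
}

def get(uri):
    """Ordinary GET works identically for the image and its metadata."""
    return resources[uri]

meta_uri = get("/images/42")["links"]["meta"]
```

Because the metadata is just another resource, the existing methods (GET, PUT, caching, conneg) apply to it with no new protocol machinery.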
Yet another alternative is to avoid the recipe upload and supply it as a test argument (essentially, skip the very first POST). That would work well if the recipe is not too big.

--- In rest-discuss@yahoogroups.com, Stian Soiland <ssoiland@...> wrote:
> On 2 May 2007, at 09:56, rogervdkimmenade wrote:
>
>> Within the TestReport the URI of the corresponding TestRecipe is put.
>> The test results can be viewed by:
>> GET http://Roger/Tests/TestReport/TestReport12
>
> (BTW, I don't really fancy the CamelCase style URLs, but I guess
> that's not as important... and do we need to repeat 'Test' everywhere
> when it's under /Test?)
>
> What about including the test run as a real resource? After all, it
> has state (running, finished, etc.) and references other resources,
> such as the recipe and the report.
>
> (I'm shortening the URLs and content):
>
> POST /TestRecipe
> <testrecipe> ..
>
> 201 Created
> Location: /TestRecipe/13
>
> POST /TestRun
> <testrun>
> <testrecipe xlink:href="/TestRecipe/13" />
> </>
>
> 201 Created
> Location: /TestRun/192
>
> GET /TestRun/192
> <testrun>
> <testrecipe xlink:href="/TestRecipe/13" />
> <status>Running</status>
> .. (Could include data such as when it was started, etc.)
>
> GET /TestRun/192
> <testrun>
> <testrecipe xlink:href="/TestRecipe/13" />
> <status>Finished</status>
> <result>Failed</result>
> <report xlink:href="/TestRun/192/report" />
> ..
>
> GET /TestRun/192/report
> <testreport>
> ..
>
> This means that if some other conditions have changed, you can post a
> new test run using the old test recipe URI.
>
> --
> Stian Soiland, myGrid team
> School of Computer Science
> The University of Manchester
> http://www.cs.man.ac.uk/~ssoiland/
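Stian's proposed flow can be exercised from the client's side. This is only a sketch against a fake in-memory server: the collection URIs follow his example, but the field names, ID scheme, and polling behaviour are assumptions:

```python
# Rough client-side walkthrough of the test-run flow: POST a recipe,
# POST a run referencing it, then poll the run until it finishes.
class FakeServer:
    def __init__(self):
        self.store, self.counter, self.polls = {}, 0, {}

    def post(self, collection, body):
        self.counter += 1
        uri = "%s/%d" % (collection, self.counter)
        self.store[uri] = body
        return 201, uri                        # 201 Created + Location

    def get(self, uri):
        doc = dict(self.store[uri])
        if uri.startswith("/TestRun/"):
            # Pretend the run finishes on the second poll.
            n = self.polls[uri] = self.polls.get(uri, 0) + 1
            doc["status"] = "Finished" if n > 1 else "Running"
            if doc["status"] == "Finished":
                doc["report"] = uri + "/report"
        return doc

server = FakeServer()
_, recipe_uri = server.post("/TestRecipe", {"steps": ["ping", "login"]})
_, run_uri = server.post("/TestRun", {"testrecipe": recipe_uri})

status = server.get(run_uri)
while status["status"] == "Running":           # poll until the run ends
    status = server.get(run_uri)
```

Note how the recipe URI stays valid after the run completes, so a second run can reference the same recipe without re-uploading it.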
Thanks to everyone for your great feedback. It helped a lot!

The final slides are uploaded here:
http://doc.opengarden.org/REST/Introduction_to_REST

Cheers,
- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
Hi,
I've noticed that many questions about resource names and representations come up repeatedly:
* handling collections
* queries
* fragments
* pagination
Is there any thought on creating some standard for these issues?
Thanks,
Ittay
On 9 May 2007, at 20:39, hovhannes_tumanyan wrote:
> Yet another alternative is to avoid the recipe upload and supply it as
> a test argument (essentially, skip the very first POST).
> That would work well if the recipe is not too big.

It's probably very clever to support both. My idea was that perhaps the client wanted to reuse the recipe by running it again in the second test.

--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
On 10 May 2007, at 10:43, Benoît Fleury wrote:
> * authentication (HTTP Authentication, Google Authentication, OpenID)

Is there a good way to use OpenID from a REST application with programmatic clients (what do we call these?) - without pretending to be a browser?

--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
Hi,

I have posted to the html5 WG mail list asking for PUT and DELETE to be included as acceptable <form> method values, to avoid needing to use XmlHttpRequest from browser UAs (and for more graceful functional degradation in the face of disabled JavaScript).

See http://lists.w3.org/Archives/Public/public-html/2007May/0917.html

Others on this list may wish to express their opinions on the matter.

Regards,
Alan Dean
http://thoughtpad.net/alan-dean
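Whatever HTML forms end up allowing, HTTP itself has no such restriction: any non-browser client can send PUT and DELETE today. A minimal sketch using Python's standard library against a throwaway local server (the paths and handler behaviour are assumptions made up for the demo):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

docs = {}  # in-memory document store for the demo

class Handler(BaseHTTPRequestHandler):
    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        docs[self.path] = self.rfile.read(length)
        self.send_response(201)               # Created
        self.end_headers()

    def do_DELETE(self):
        existed = docs.pop(self.path, None) is not None
        self.send_response(204 if existed else 404)
        self.end_headers()

    def log_message(self, *args):             # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("PUT", "/notes/1", body=b"hello")
resp = conn.getresponse()
put_status = resp.status
resp.read()

conn.request("DELETE", "/notes/1")
resp = conn.getresponse()
delete_status = resp.status
resp.read()
server.shutdown()
```

The point is that the gap is in HTML's form vocabulary, not in HTTP or in server-side support for the methods.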
On May 9, 2007, at 12:54 PM, Steve Bjorg wrote:
> Thanks to everyone for your great feedback. It helped a lot!
>
> The final slides are uploaded here:
> http://doc.opengarden.org/REST/Introduction_to_REST

It is missing one slide. I'd point out which one, but I am curious if anyone else can figure out what is missing from the description. I am especially curious because almost every presentation that tries to describe REST leaves the same bits out, and I am wondering if it is because I explained it poorly in my dissertation or if it is simply hard to understand why it is essential to the style.

Otherwise, it is a reasonable talk given the time limitations. I would spend more time on the "why" parts, but that's just me, and I'll be doing my own presentation at Jazoon07 anyway.

BTW, I don't understand the triangle diagram. I have absolutely no idea what that has to do with REST, or why it appears on the wikipedia entry as well. YMMV.

....Roy
On Thu, 2007-05-10 at 05:53 -0700, Roy T. Fielding wrote:
> On May 9, 2007, at 12:54 PM, Steve Bjorg wrote:
>
> > Thanks to everyone for your great feedback. It helped a lot!
> >
> > The final slides are uploaded here:
> > http://doc.opengarden.org/REST/Introduction_to_REST
>
> It is missing one slide. I'd point out which one, but I am curious
> if anyone else can figure out what is missing from the description.
What is "Hypermedia is the engine of application state"?
> I am especially curious because almost every presentation that
> tries to describe REST leaves the same bits out, and I am wondering
> if it is because I explained it poorly in my dissertation or if it
> is simply hard to understand why it is essential to the style.
It's hard to apply in the machine-to-machine, replacement-for-RPC case,
which is where many people really want to use REST. If the client isn't
a user-agent, but is instead a knowledgeable actor in the domain, it can
be out of place for the server to tell it what links to traverse, and
what forms look like, rather than the client just constructing the links
and building the "form-response" from out-of-band knowledge. It's extra
overhead. While essential in the large-scale, evolutionary-web
situation, it's wasted on the smaller-scale "I know that I want to make
a version 1.1 'POST shoppingCartItemAddition' request of
ShoppingCartFormat 2.7".
As for the slides, I'd not label it a "design pattern". As I
understand, architecture sets up the constraints of a solution space in
a context, and design expresses a more specific class of technical
solution within those constraints.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
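The trade-off Josh describes can be made concrete by contrasting a client that builds URIs from out-of-band knowledge with one that only follows server-supplied links. A toy sketch; the representation format and link names are invented:

```python
# A server response carrying its own transitions (format is hypothetical).
representation = {
    "self": "/cart/7",
    "links": {"add-item": "/cart/7/items"},   # server-supplied transition
}

def rpc_style_uri(cart_id):
    # Client hard-codes the URI layout from out-of-band knowledge:
    # breaks the moment the server reorganizes its URI space.
    return "/cart/%d/items" % cart_id

def hypermedia_uri(doc):
    # Client only knows the meaning of the "add-item" link relation
    # and takes the URI from the representation it just received.
    return doc["links"]["add-item"]
```

The two functions return the same URI today; the difference is only in what each client is coupled to, which is exactly the evolutionary concern Josh weighs against the extra overhead.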
On 5/10/07, Roy T. Fielding <fielding@...> wrote:
> On May 9, 2007, at 12:54 PM, Steve Bjorg wrote:
>
>> Thanks to everyone for your great feedback. It helped a lot!
>>
>> The final slides are uploaded here:
>> http://doc.opengarden.org/REST/Introduction_to_REST
>
> It is missing one slide. I'd point out which one, but I am curious
> if anyone else can figure out what is missing from the description.

Hypermedia?

Mark.
> While essential in the large-scale, evolutionary-web
> situation, it's wasted on the smaller-scale "I know that I want to make
> a version 1.1 'POST shoppingCartItemAddition' request of
> ShoppingCartFormat 2.7".
But if you have hypermedia to describe how to construct such a
request, it becomes far easier for an outside developer to integrate
with.
Benoit Fleury wrote:
> Hi,
>
> I agree with you. It would be interesting to retrieve/organize "REST
> design patterns" to solve common problems.
>
> Some pointers about your common questions.
> * handling collections: APP and the Atom format manage collections well
> * queries: OpenSearch (+ a need for POST queries, in my opinion)
> * fragments:
> * pagination: OpenSearch, APP
>
> Other common questions:
> * edition conflicts management (GData);
> * authentication (HTTP Authentication, Google Authentication, OpenID)

I've been documenting some of the usual patterns on our open-source community wiki at:

http://doc.opengarden.org/REST/REST_Patterns

It's a work in progress (as a wiki always is), but such a resource is truly needed. Most people understand much better by example than by principle, and they need to see the basic building blocks they can use in their applications.

- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
On May 10, 2007, at 5:53 AM, Roy T. Fielding wrote:
> On May 9, 2007, at 12:54 PM, Steve Bjorg wrote:
>
>> Thanks to everyone for your great feedback. It helped a lot!
>>
>> The final slides are uploaded here:
>> http://doc.opengarden.org/REST/Introduction_to_REST
>
> It is missing one slide. I'd point out which one, but I am curious
> if anyone else can figure out what is missing from the description.

I sure would be curious to know! :)

> (snip)
>
> Otherwise, it is a reasonable talk given the time limitations.
> I would spend more time on the "why" parts, but that's just me,
> and I'll be doing my own presentation at Jazoon07 anyway.

Can you share your slides after your talk?

> BTW, I don't understand the triangle diagram. I have absolutely
> no idea what that has to do with REST, or why it appears on the
> wikipedia entry as well. YMMV.

It's an effective visual mnemonic to tie everything together. There is no meaning to the triangle. If there were more principles, it could be an octagon. :)

- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
My understanding is that HTML5 references Web Forms 2.0 for form submission, and WF2.0 already has support for PUT and DELETE:

http://www.whatwg.org/specs/web-forms/current-work/#for-http

Pete

> I have posted to the html5 WG mail list asking for PUT and DELETE to be
> included as acceptable <form> method values, to avoid needing to use
> XmlHttpRequest from browser UAs (and for more graceful functional
> degradation in the face of disabled JavaScript).
On 5/10/07, Peter Lacey <placey@...> wrote:
> My understanding is that HTML5 references Web Forms 2.0 for form
> submission, and WF2.0 already has support for PUT and DELETE:
> http://www.whatwg.org/specs/web-forms/current-work/#for-http

You are correct - I had not seen that. Lachlan Hunt put me right:

http://lists.w3.org/Archives/Public/public-html/2007May/0922.html

Alan
Stian Soiland wrote:
> On 10 May 2007, at 10:43, Benoît Fleury wrote:
>
>> * authentication (HTTP Authentication, Google Authentication, OpenID)
>
> Is there a good way to use OpenID from a REST application with
> programmatic clients (what do we call these?) - without pretending to
> be a browser?

This is a topic of current discussion over on the openid-general mailing list. I'm also looking at how to do REST-oriented authentication/authorization right now, using AOL's OpenAuth service for an Atom API. OpenAuth seems a bit easier to bite off to start with. All solutions, however, seem to involve somebody pretending to be a browser at some point in the process. We might be able to get rid of cookies at least.

John
Steve said:
> Thanks to everyone for your great feedback. It helped a lot!
>
> The final slides are uploaded here:
> http://doc.opengarden.org/REST/Introduction_to_REST

Might be worth reading the following:

http://www.presentationzen.com/presentationzen/2007/05/the_source_of_a.html

;-)

Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com
On May 10, 2007, at 6:37 AM, Josh Sled wrote:
> On Thu, 2007-05-10 at 05:53 -0700, Roy T. Fielding wrote:
>> On May 9, 2007, at 12:54 PM, Steve Bjorg wrote:
>>
>>> Thanks to everyone for your great feedback. It helped a lot!
>>>
>>> The final slides are uploaded here:
>>> http://doc.opengarden.org/REST/Introduction_to_REST
>>
>> It is missing one slide. I'd point out which one, but I am curious
>> if anyone else can figure out what is missing from the description.
>
> What is "Hypermedia is the engine of application state"?

Yep.

>> I am especially curious because almost every presentation that
>> tries to describe REST leaves the same bits out, and I am wondering
>> if it is because I explained it poorly in my dissertation or if it
>> is simply hard to understand why it is essential to the style.
>
> It's hard to apply in the machine-to-machine, replacement-for-RPC case,
> which is where many people really want to use REST. If the client isn't
> a user-agent, but is instead a knowledgeable actor in the domain, it can
> be out of place for the server to tell it what links to traverse, and
> what forms look like, rather than the client just constructing the links
> and building the "form-response" from out-of-band knowledge. It's extra
> overhead. While essential in the large-scale, evolutionary-web
> situation, it's wasted on the smaller-scale "I know that I want to make
> a version 1.1 'POST shoppingCartItemAddition' request of
> ShoppingCartFormat 2.7".

Umm, no, it is essential to eliminate the coupling between client and server. If the application doesn't follow the workflow defined by the representations that are received, then the application isn't using the REST style. Not even a little bit. It is using RPC plus streaming, with a rather inefficient syntax, and the client will break each time the server's application evolves because the client must be anticipating the server's state based on its own assumptions. In other words, the two are coupled by their original design.

REST simplifies applications because it rips apart the million potential states inherent in any serious application and presents to the client only one at a time, with every single transition from that state described in a format that can be understood by the client as a potential transition. The client is completely decoupled from the server aside from the shared agreement on what each media type means. The entire application only needs to be understood (and can be completely tested) one state at a time.

The only difference between machine-to-machine interaction and human-browser interaction is the choice of media types and the degree to which the potential transitions are described by those types. A browser knows the difference between an anchor and an in-line image because the media type standard defines that difference. It doesn't have to ask the user each time whether they want a given relationship to be treated as an in-line image, stylesheet, javascript, atom subscription, or any of the other relationships that are automated with even browser-based hypermedia. A pure machine-to-machine client simply automates all transitions based on some predefined (or adaptive) criteria that is evaluated for each representation received.

Hypermedia means the placement of controls within the presentation of information -- it is not just a GUI paradigm.

> As for the slides, I'd not label it a "design pattern". As I
> understand, architecture sets up the constraints of a solution space in
> a context, and design expresses a more specific class of technical
> solution within those constraints.

Pattern is, unfortunately, an overused term. There are some architectural patterns which are essentially the same as styles, but most of the design patterns are simply language idioms. The software research folks call the former styles, whereas the OOPL research folks chose to call just about everything a design pattern. The funny thing is that Christopher Alexander's work defined patterns based on common living patterns (more like our view of software data flows over time), not recipes for builders, so OOPL design patterns have always been a bit of an oddity. They are both important, but architecture should be about how the system works when it is running, not how to structure code.

....Roy
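Roy's "one state at a time" description can be caricatured as a tiny state machine: each representation carries its own outgoing transitions, and the client automates them using only a link vocabulary it understands. A sketch with invented states and link names, not anyone's actual API:

```python
# Toy hypermedia application: every representation lists its own
# possible transitions; the client never constructs a URI itself.
states = {
    "/order/1": {"status": "open",
                 "transitions": {"payment": "/order/1/payment"}},
    "/order/1/payment": {"status": "awaiting-payment",
                         "transitions": {"receipt": "/order/1/receipt"}},
    "/order/1/receipt": {"status": "complete", "transitions": {}},
}

def run(start, policy):
    """Automate transitions, evaluating each representation as received."""
    uri, visited = start, []
    while uri:
        doc = states[uri]                 # stand-in for a GET
        visited.append(doc["status"])
        uri = policy(doc["transitions"])  # choose the next transition
    return visited

# Machine-to-machine policy: follow whichever offered transition comes
# first; a real client would select by link relation.
trail = run("/order/1", lambda t: next(iter(t.values()), None))
```

Each state can be tested in isolation, and the server can rearrange its URI space freely; only the transition vocabulary is shared agreement.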
"Andrzej Jan Taramina" <andrzej@...> wrote: > > Steve said: > > > Thanks to everyone for your great feedback. It helped a lot! > > > > The final slides are uploaded here: > > http://doc.opengarden.org/REST/Introduction_to_REST > > Might be worth reading the following: > > http://www.presentationzen.com/presentationzen/2007/05/the_source_of_a.html > > ;-) > > > Andrzej Jan Taramina > Chaeron Corporation: Enterprise System Solutions > http://www.chaeron.com > Hehe. Point taken! :) - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
On Thu, 2007-05-10 at 08:58 -0700, Roy T. Fielding wrote:
> Umm, no, it is essential to eliminate the coupling between client
> and server. If the application doesn't follow the workflow defined
> by the representations that are received, then the application isn't
> using the REST style. Not even a little bit. It is using RPC plus
> streaming, with a rather inefficient syntax, and the client will
> break each time the server's application evolves because the client
> must be anticipating the server's state based on its own assumptions.
> In other words, the two are coupled by their original design.
I should have been clearer. I don't disagree, at all. I think what a
lot of people are calling "REST" right now – and part of the negative
response it encounters – is exactly because of this mis-use and
mis-understanding.
E.g. http://developer.yahoo.com/photos/V3.0/createAlbum.html
Regardless, it's still harder to apply – in both REST and "REST" – than
simply coding to a specific version/workflow/API, which is all that most
people care to do. Laziness wins. :/
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
i'm having a hard time understanding your explanation. can you please post two examples, one "RPC plus streaming", the other true REST?
i assume you're not referring as 'RPC' to the obvious http://www.example.org/addToShoppingCart
thanks,
ittay
On 5/10/07, Roy T. Fielding <fielding@...> wrote:
> [. . .]
> REST simplifies applications because it rips apart the million
> potential states inherent in any serious application and presents
> to the client only one at a time, with every single transition
> from that state described in a format that can be understood by
> the client as a potential transition. The client is completely
> decoupled from the server aside from the shared agreement on what
> each media type means. The entire application only needs to be
> understood (and can be completely tested) one state at a time.

This is where I get a little confused. I assume that the client still needs to understand each of the possible states so that it knows what to do with/at each of those states. I also assume that the client needs to understand what each of the transitions (links) out of the current state means so that it can determine which one to follow.

What happens when the application evolves to include a new state? How does the client figure out what to do with that new state?

--Chuck

> The only difference between machine-to-machine interaction and
> human-browser interaction is the choice of media types and the
> degree to which the potential transitions are described by those
> types. A browser knows the difference between an anchor and an
> in-line image because the media type standard defines that difference.
> It doesn't have to ask the user each time whether they want a
> given relationship to be treated as an in-line image, stylesheet,
> javascript, atom subscription, or any of the other relationships
> that are automated with even browser-based hypermedia. A pure
> machine-to-machine simply automates all transitions based on
> some predefined (or adaptive) criteria that is evaluated for
> each representation received.
>
> Hypermedia means the placement of controls within the presentation
> of information -- it is not just a GUI paradigm.
Josh Sled wrote:
> On Thu, 2007-05-10 at 05:53 -0700, Roy T. Fielding wrote:
>> On May 9, 2007, at 12:54 PM, Steve Bjorg wrote:
>>
>>> Thanks to everyone for your great feedback. It helped a lot!
>>>
>>> The final slides are uploaded here:
>>> http://doc.opengarden.org/REST/Introduction_to_REST
>>
>> It is missing one slide. I'd point out which one, but I am curious
>> if anyone else can figure out what is missing from the description.
>
> What is "Hypermedia is the engine of application state"?

And you can explain it like this: Have URI, will follow.

K.
--
Blacknight Internet Solutions Ltd. <http://blacknight.ie/>
Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen,
Carlow, Ireland
Company No.: 370845
* Keith Gaughan <keith@...> [2007-05-11 12:15]:
> And you can explain it like this: Have URI, will follow.

That is a little too trivial. Maybe like this: Have representation with URIs, will follow.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Haha, I thought that title would get some attention.

For the most part I get the whole resources thing, but I'm having trouble coming up with a restful way of solving a problem. I'm making an app that allows users to make graphs of ecological data from different study sites. In my first case, I want to allow a user to get a plot of change in abundance of a particular species over time at one study site. My url looks like this:

study_sites/:site_id/abundances/:species_id.png

I could also specify .xml and get the raw data. Pretty good way of representing this.

Ok, now imagine that I want to allow users to create a plot comparing multiple species and multiple sites! As an RPC call, it would look something like this:

/plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC

Basically, it's an RPC call that accepts an arbitrary number of sites and species. I really can't imagine a scheme for doing this with rest. The only thing I could think of is POST+redirect, and cache the image that it is redirected to. Unfortunately, you can't really encode a POST into an img tag. I would have to do some ajax calls and insert a link to the redirect location.

I was hoping to find a REST design patterns reference somewhere, but no luck so far. Could somebody take a crack at this?

Thanks,
Chad
On Fri, 2007-05-11 at 15:21 +0000, under.bluewaters wrote:
> Ok, now imagine that I want to allow users to create a plot comparing
> multiple species and multiple sites! As an RPC call, it would look
> something like this:
>
> /plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC
What if you change 'plot' from a verb to a noun?
Then it becomes a parameterized resource instead of a procedure call.
If you really care to remove the query-part, maybe something like
</plot/places/HAZARDS,ANACAPA/species/PCLA,CNIC>.
> Basically, its an RPC call that accepts and arbitrary number of sites
> and species. I really can't imagine a scheme for doing this with rest.
> The only thing I could think of is POST+redirect, and cache the image
> that it is redirected to. Unfortunately, you can't really encode a
> POST into an img tag. I would have to do some ajax calls and insert a
> link to the redirect location.
Or, you could POST a form with all the parameters/values, creating a new
– perhaps transient – /plot/{id} resource.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
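[Editor's note: Josh's noun-ified, comma-list resource path can be sketched as a tiny URL builder. The helper name and path layout below are only illustrative; they mirror his example, not any real API.]

```python
from urllib.parse import quote

def make_plot_uri(sites, species):
    # Hypothetical path scheme, mirroring Josh's example:
    # /plot/places/HAZARDS,ANACAPA/species/PCLA,CNIC
    places = ",".join(quote(s, safe="") for s in sites)
    kinds = ",".join(quote(s, safe="") for s in species)
    return f"/plot/places/{places}/species/{kinds}"

uri = make_plot_uri(["HAZARDS", "ANACAPA"], ["PCLA", "CNIC"])
```

Percent-encoding each value keeps a literal comma in a site name from colliding with the list separator.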
under.bluewaters wrote: > Ok, now imagine that I want to allow users to create a plot comparing > multiple species and multiple sites! As an RPC call, it would look > something like this: > > /plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC > > Basically, its an RPC call that accepts and arbitrary number of sites > and species. I really can't imagine a scheme for doing this with rest. > The only thing I could think of is POST+redirect, and cache the image > that it is redirected to. Unfortunately, you can't really encode a > POST into an img tag. I would have to do some ajax calls and insert a > link to the redirect location. What is particularly RPC (or non-REST) about the above URL? It looks perfectly ordinary to me.
I've been experimenting with building visual REST applications. I used frevvo <http://www.frevvo.com> and Restlet <http://www.restlet.org> (I work for frevvo). I've posted a couple of blog articles about my experiences and I was wondering how other people are building visual web applications with REST. You can find much more including diagrams, examples and source code at: Visual REST apps: Part 1 <http://www.frevvo.com/blog/?p=26> Visual REST apps: Part 2 <http://www.frevvo.com/blog/?p=32> I tend to think in terms of View Resources (forms) that compose Entity Resources (documents). The View Resource can update existing entities, create new ones, store locally or update remotely, all in a uniform RESTful manner. IMO, it's a very powerful and flexible way to interact with the web and create REST applications that handle complex visual elements without compromising REST principles. I'd be very interested in your thoughts. Thx, -Ashish
Chris Burdess wrote: > under.bluewaters wrote: > >> Ok, now imagine that I want to allow users to create a plot comparing >> multiple species and multiple sites! As an RPC call, it would look >> something like this: >> >> /plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC >> >> Basically, its an RPC call that accepts and arbitrary number of sites >> and species. I really can't imagine a scheme for doing this with rest. >> The only thing I could think of is POST+redirect, and cache the image >> that it is redirected to. Unfortunately, you can't really encode a >> POST into an img tag. I would have to do some ajax calls and insert a >> link to the redirect location. >> > > What is particularly RPC (or non-REST) about the above URL? It looks > perfectly ordinary to me. > > Make it a noun, and it becomes perfectly RESTful: /abundancePlot?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC :^) Actually seriously: Instead of viewing this as an operation, just view it as a query in a very very large query space. GET is perfect for this. -John
So we all agree, the verb is GET. But it sounds like you're looking for a noun in a non-hierarchical space. Maybe this will help: <http://www.w3.org/DesignIssues/MatrixURIs.html> > The analogy with procedure call holds still when looking at combined forms: The > hierarchical part of the URL is parsed first, and then the semi-colon separated qualifiers > are parsed as indicating positions in some matrix. As an example let's imagine the URL > of an automatically generated map in which the parameters for latitude, longitude and > scale are given separately. Each may be named, and each if omitted may take a default. > So, for example, > > //moremaps.com/map/color;lat=50;long=20;scale=32000 or translating into your example: /speciesAbundancePlot;site1=HAZARDS;site2=ANACAPA;species1=PCLA;species2=CNIC -Ray --- In rest-discuss@yahoogroups.com, John Panzer <jpanzer@...> wrote: > > Chris Burdess wrote: > > under.bluewaters wrote: > > > >> > >> /plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC > >> > >> Basically, its an RPC call that accepts and arbitrary number of sites > >> and species. I really can't imagine a scheme for doing this with rest. > >> The only thing I could think of is POST+redirect, and cache the image > >> that it is redirected to. Unfortunately, you can't really encode a > >> POST into an img tag. I would have to do some ajax calls and insert a > >> link to the redirect location. > >> > > > > What is particularly RPC (or non-REST) about the above URL? It looks > > perfectly ordinary to me. > > > > > Make it a noun, and it becomes perfectly RESTful: > > /abundancePlot?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC > > :^) > > Actually seriously: Instead of viewing this as an operation, just view > it as a query in a very very large query space. GET is perfect for this. > > -John >
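[Editor's note: a matrix-URI segment like Ray's is easy to pick apart. This parser is only a sketch; matrix URIs are a W3C design note, not a standardized syntax.]

```python
def parse_matrix_segment(segment):
    # Split 'name;k1=v1;k2=v2' into (name, {k: v}).
    # Sketch only -- matrix URIs have no standardized grammar.
    name, *pairs = segment.split(";")
    params = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        params[key] = value
    return name, params

name, params = parse_matrix_segment(
    "speciesAbundancePlot;site1=HAZARDS;site2=ANACAPA;species1=PCLA;species2=CNIC"
)
```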
On Fri, 2007-05-11 at 16:39 +0100, Chris Burdess wrote:
> under.bluewaters wrote:
> > Ok, now imagine that I want to allow users to create a plot
> comparing
> > multiple species and multiple sites! As an RPC call, it would look
> > something like this:
> >
> > /plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC
> >
> > Basically, its an RPC call that accepts and arbitrary number of
> sites
> > and species. I really can't imagine a scheme for doing this with
> rest.
> > The only thing I could think of is POST+redirect, and cache the
> image
> > that it is redirected to. Unfortunately, you can't really encode a
> > POST into an img tag. I would have to do some ajax calls and insert
> a
> > link to the redirect location.
>
> What is particularly RPC (or non-REST) about the above URL? It looks
> perfectly ordinary to me.
It's on the cusp: it's Mark Baker's 'accidentally RESTful'.
http://www.markbaker.ca/blog//2005/04/14#2005-04-amazon-next
If you're seeing a function call - i.e., plotAbundance(HAZARDS, ANACAPA,
PCLA, CNIC) - then your RESTfulness will be a fragile thing! You may
assume it's OK to have side effects, you may start introducing
un-dereferenceable internal ids to 'private' data, etc, etc. We are
already going down that path: notice how the public external form:
/study_sites/ANACAPA/abundances/CNIC
doesn't appear in the suggested query. The following:
/abundancePairPlot?red=/study_sites/HAZARDS/abundances/PCLA&blue=/study_sites/ANACAPA/abundances/CNIC
...looks and feels better (declarative, not imperative), and may encourage
the implementor to think about the cacheability, etc of this resource.
Pragmatically, removing the query approach (as Josh Sled suggested)
would also get it into more (proxy-)caches.
Another risk of function-call accidental RESTfulness is that the return
data won't have any more useful links in it ("no-one puts function calls
inside exported data!").
Having said all that, I would always go for POST-redirect, and find a
way to get the resultant cacheable, opaque URI into your img tag. Why do
you need URI transparency anyway in your img, in this example?
Duncan
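[Editor's note: Duncan's declarative form embeds whole resource URIs as query values, so they need percent-encoding. A sketch with Python's standard library; the parameter names `red`/`blue` are his, the rest is illustrative.]

```python
from urllib.parse import urlencode, parse_qs

# The query values are themselves resource URIs, so their slashes
# must be percent-encoded when embedded in the query string.
params = {
    "red": "/study_sites/HAZARDS/abundances/PCLA",
    "blue": "/study_sites/ANACAPA/abundances/CNIC",
}
query = urlencode(params)          # slashes become %2F
uri = "/abundancePairPlot?" + query

# ...and they round-trip cleanly on the server side:
decoded = {k: v[0] for k, v in parse_qs(query).items()}
```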
On May 10, 2007, at 2:48 PM, Chuck Hinson wrote: > On 5/10/07, Roy T. Fielding <fielding@...> wrote: > > > [. . .] > > REST simplifies applications because it rips apart the million > > potential states inherent in any serious application and presents > > to the client only one at a time, with every single transition > > from that state described in a format that can be understood by > > the client as a potential transition. The client is completely > > decoupled from the server aside from the shared agreement on what > > each media type means. The entire application only needs to be > > understood (and can be completely tested) one state at a time. > > > This is where I get a little confused. > > I assume that the client still needs to understand each of the > possible > states so that it knows what to do with/at each of those states. I > also > assume that the client needs to understand what each of the > transitions > (links) out of the current state means so that it can determine > which one > to follow. Each state is described by the current set of representations. A client's understanding will depend on its understanding of the media types. > What happens when the application evolves to include a new > state? How does the client figure out what to do with that new state? The client doesn't need to memorize the states, other than the few cool URI "start states" that might be bookmarked. Each state tells it what to do next. The evolution a client has to worry about is when the media type evolves in unanticipated ways, which is the same problem as occurs when new HTML elements are added that a browser doesn't recognize. Perhaps a representation is provided such that the new bits can be safely ignored by old clients, or parallel states are introduced for different types of clients, or clients are automagically updated through code on demand, or clients use content negotiation on each request. 
Those are all typical mechanisms for backwards compatibility with clients. ....Roy
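[Editor's note: Roy's "one state at a time" point can be caricatured in a few lines. The client below hardcodes no state graph, only the link relations its media type defines; each representation supplies the next transitions. All URIs and link names here are invented for illustration.]

```python
# Toy 'media type': each representation is a dict whose 'links'
# name the transitions available *from this state only*.
REPRESENTATIONS = {
    "/cart": {"links": {"checkout": "/checkout"}},
    "/checkout": {"links": {"confirm": "/confirmed"}},
    "/confirmed": {"links": {}},
}

def get(uri):
    return REPRESENTATIONS[uri]  # stand-in for an HTTP GET

def follow(start, rels):
    # Client logic: it knows link relations, never the server's
    # overall state graph -- the server can rewire it freely.
    uri = start
    for rel in rels:
        uri = get(uri)["links"][rel]
    return uri

final = follow("/cart", ["checkout", "confirm"])
```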
On 11/05/07, Roy T. Fielding <fielding@...> wrote: > > I assume that the client still needs to understand each of the > > possible > > states so that it knows what to do with/at each of those states. > Each state is described by the current set of representations. > A client's understanding will depend on its understanding of the > media types. > > What happens when the application evolves to include a new > > state? How does the client figure out what to do with that new state? > > The client doesn't need to memorize the states, other than the > few cool URI "start states" that might be bookmarked. Each state > tells it what to do next. So where (today) I might validate some input, and dependent on n state variables.... present A or B or C, now I simply generate links (for a human interaction) appropriate to the current state.. with all information presented via this representation to satisfy the reader's informational need? I.e. within each state, there's less to concern the application writer - which should be easier to design. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
On May 10, 2007, at 11:25 AM, Ittay Dror wrote: > i'm having a hard time understanding your explanation. can you > please post two examples, one "RPC plus streaming", the other true > REST? RPC is remote procedure call. One of the things that typical RPC mechanisms lack is the ability to describe responses (or even parameters) as a stream of data as opposed to a small data type. HTTP has no problem doing that even when it is used in an RPC way. So, people who advocate "resource-oriented" as an end in itself, as if the only thing you need to build a network-based application is a URI template and resource definition language, are just fooling themselves into thinking what they are doing is REST. Resources are just one part of the style. Resources are necessary for REST, but resources must be allowed to evolve independently from the clients and the only way that can happen is when the clients are expecting to be instructed by the next representation received. Otherwise, the client is making assumptions about the server's implementation --- assumptions that will break eventually --- and becomes much more complex and brittle than the type of network-based application that REST is intended to encourage. That is why hypermedia as the engine of application state is an essential constraint of REST, and why RESTful application frameworks cannot ignore the need for meaningful media types, whether it be in the form of microformats or specialized XML data types. ....Roy
--- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> wrote: > It is missing one slide. I'd point out which one, but I am curious > if anyone else can figure out what is missing from the description. I'll take a stab: A slide on State. As Tim Ewald put it in his recent personal revelation concerning REST: "The essence of REST is to make the states of the protocol explicit and addressible by URIs." What a lot of people miss in the discussion about the representation of resources is that the representation transferred must be a representation of the STATE of the resource. That's why it's called Representation (of) STATE Transfer. -- Nick
So if I understand correctly: take the case of the shopping cart. REST-like implementation will return a representation of the shopping cart as a list of catalog numbers of items. almost-REST implementation will return a list of URIs of the items. REST implementation will return a list of links where the link targets are the items.
Are these examples correct?
If so, wouldn't a client of a true REST implementation still need to
understand the representation to know what links to follow: say a
shopping cart's representation is paged, so each chunk also has
'next' and 'prev' links. A client would need to know their semantics
in order to use them correctly, right?
Also, what about when a client doesn't want to just view (GET) the
state, but also to update it, how can it know how data should be sent
to the server, just based on the representation?
Thanks,
Ittay
Roy T. Fielding wrote
on 05/11/07 21:21:
On May 10, 2007, at 11:25 AM, Ittay Dror wrote:
i'm having a hard time understanding your explanation. can you please post two examples, one "RPC plus streaming", the other true REST?
RPC is remote procedure call. One of the things that typical RPC
mechanisms lack is the ability to describe responses (or even
parameters) as a stream of data as opposed to a small data type.
HTTP has no problem doing that even when it is used in an RPC way.
So, people who advocate "resource-oriented" as an end in itself,
as if the only thing you need to build a network-based application
is a URI template and resource definition language, are just fooling
themselves into thinking what they are doing is REST. Resources
are just one part of the style. Resources are necessary for REST,
but resources must be allowed to evolve independently from the
clients and the only way that can happen is when the clients are
expecting to be instructed by the next representation received.
Otherwise, the client is making assumptions about the server's
implementation --- assumptions that will break eventually --- and
becomes much more complex and brittle than the type of network-based
application that REST is intended to encourage.
That is why hypermedia as the engine of application state is
an essential constraint of REST, and why RESTful application
frameworks cannot ignore the need for meaningful media types,
whether it be in the form of microformats or specialized XML
data types.
....Roy
Ittay Dror wrote: > > So if I understand correctly: take the case of the shopping cart. > REST-like implementation will return a representation of the shopping > cart as a list of catalog numbers of items. almost-REST implementation > will return a list of URIs of the items. REST implementation will > return a list of links where the link targets are the items. > > > Are these examples correct? > > > If so, wouldn't a client of a true REST implementation still need to > understand the representation to know what links to follow: say a > shopping cart's representation is paged, so each chunk has also a > 'next' and 'prev' links. A client would need to know their semantics > in order to use them correctly, right? > > > Also, what about when a client doesn't want to just view (GET) the > state, but also to update it, how can he know how data should be sent > to the server, just based on representation? > Based on the requirements above, I'd probably look at a shopping cart profile of the Atom format (application/atom+xml) and protocol (add/edit/delete items). YMMV. ...because it already implements these fairly generic features, is already specc'd, and there are libraries and people already dealing with it. Disclaimer: I'm an Atom-ite. -John
--- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> wrote: > > So, people who advocate "resource-oriented" as an end in itself, > as if the only thing you need to build a network-based application > is a URI template and resource definition language, are just fooling > themselves into thinking what they are doing is REST. Resources > are just one part of the style. Resources are necessary for REST, > but resources must be allowed to evolve independently from the > clients and the only way that can happen is when the clients are > expecting to be instructed by the next representation received. So, since the XML result returned by the Atom Publishing Protocol does not include hyperlinks with each entry for deleting that entry, or updating it, or inserting a new entry (or some equivalent way of representing the allowable state transitions), then APP is not fully RESTful; since it does not comply with the "Hypermedia as the Engine of Application State" (HEAS) constraint? I think I understand how REST works for UA2AA (User Agent to Automated Agent) protocols. Are there any examples of AA2AA (Automated Agent to Automated Agent) REST protocols in production that really do HEAS right? Tim Ewald's hypothetical example of a flight reservation protocol helped, but it's a toy example, not a real production example. -- Nick
Nick Gall wrote: > --- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> > wrote: > >> So, people who advocate "resource-oriented" as an end in itself, >> as if the only thing you need to build a network-based application >> is a URI template and resource definition language, are just fooling >> themselves into thinking what they are doing is REST. Resources >> are just one part of the style. Resources are necessary for REST, >> but resources must be allowed to evolve independently from the >> clients and the only way that can happen is when the clients are >> expecting to be instructed by the next representation received. >> > > So, since the XML result returned by the Atom Publishing Protocol does > not include hyperlinks with each entry for deleting that entry, or > updating it, or inserting a new entry (or some equivalent way of > representing the allowable state transitions), then APP is not fully > RESTful; since it does not comply with the "Hypermedia as the Engine > of Application State" (HEAS) constraint? > APP defines <link rel="edit" href="..."/> in atom:entry elements for deleting/updating entries. The service document defines <collection href="..."/> for discovering collections (feeds) of entries, and collections are also where you POST new entries. Is there something missing? -John
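[Editor's note: pulling the APP edit link out of an entry is a one-liner with any XML library. A sketch with Python's ElementTree; the entry document and href values are invented.]

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Minimal, made-up Atom entry carrying an APP edit link.
entry_xml = """<entry xmlns="http://www.w3.org/2005/Atom">
  <title>An entry</title>
  <link rel="edit" href="http://example.com/edit/1"/>
  <link rel="alternate" href="http://example.com/1"/>
</entry>"""

def edit_uri(xml_text):
    # Return the href of the 'edit' link, or None if absent --
    # the client discovers where to PUT/DELETE from the hypertext.
    entry = ET.fromstring(xml_text)
    for link in entry.findall(ATOM + "link"):
        if link.get("rel") == "edit":
            return link.get("href")
    return None
```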
Roy T. Fielding wrote: "So, people who advocate "resource-oriented" as an end in itself, as if the only thing you need to build a network-based application is a URI template and resource definition language, are just fooling themselves into thinking what they are doing is REST. Resources are just one part of the style." I'm curious to know if you were asked to review the forthcoming (already out?) "RESTful Web Services". I haven't read it, but have read Sam Ruby's blog over the years and am wondering how well his ideas map to your (i.e., the correct by definition) concept of REST. If that book becomes popular, which it seems it will be based on the response, it may become the working definition of REST to many people, which, *if* it is missing important aspects of the definition, would be a shame.
I have argued that APP is "less-weblike" than HTML applications because you don't send documents in response to the instructions in a form. You build the documents according to a recipe you read in a spec, and submit them to urls you get by parsing hypertext. By agreement we know that you can POST an <entry> document to the url designated by rel=something. But on the web you get an HTML form from a URL and the form tells you how to form the submission and where to POST it. Since Roy will jump your shit if you start claiming your own personal preferences constitute REST, I just called it less web like, because you could argue that hey, we are following hyperlinks. Parsing HTML forms and POSTing to the action urls is more hypertexty than building predefined documents, but I didn't think I needed to pursue that argument. Hugh On 5/11/07, Nick Gall <nick.gall@...> wrote: > --- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> > wrote: > > > > So, people who advocate "resource-oriented" as an end in itself, > > as if the only thing you need to build a network-based application > > is a URI template and resource definition language, are just fooling > > themselves into thinking what they are doing is REST. Resources > > are just one part of the style. Resources are necessary for REST, > > but resources must be allowed to evolve independently from the > > clients and the only way that can happen is when the clients are > > expecting to be instructed by the next representation received. > > So, since the XML result returned by the Atom Publishing Protocol does > not include hyperlinks with each entry for deleting that entry, or > updating it, or inserting a new entry (or some equivalent way of > representing the allowable state transitions), then APP is not fully > RESTful; since it does not comply with the "Hypermedia as the Engine > of Application State" (HEAS) constraint?
> > I think I understand how REST works for UA2AA (User Agent to Automated > Agent) protocols. Are there any examples of AA2AA (Automated Agent to > Automated Agent) REST protocols in production that really do HEAS > right? Tim Ewald's hypothetical example of a flight reservation > protocol helped, but it's a toy example, not a real production example. > > -- Nick -- Hugh Winkler Wellstorm Development http://www.wellstorm.com/ +1 512 694 4795 mobile (preferred) +1 512 264 3998 office
* Hugh Winkler <hughw@...> [2007-05-11 22:25]: > I have argued that APP is "less-weblike" than HTML applications > because you don't send documents in response to the > instructions in a form. You build the documents according to a > recipe you read in a spec, and submit them to urls you get by > parsing hypertext. By agreement we know that you can POST an > <entry> document to the url designited by rel=something. Yes. This is obviously different from the situation on the web, where you build an application/x-www-form-urlencoded string according to a recipe you read in a spec, and by agreement you know you can post such a string to the URI designated by <form action>. Wait… :-) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
--- In rest-discuss@yahoogroups.com, John Panzer <jpanzer@...> wrote: > > APP defines <link rel="edit" href="..."/> in atom:entry elements for > deleting/updating entries. The service document defines <collection > href="..."/> for discovering collections (feeds) of entries, and > collections are also where you POST new entries. Is there something > missing? Wow, I did not know that about the APP! I must admit I've never given it a close reading. Now that I know about them, I do believe they satisfy the HEAS constraint ... for entries. But why is there not something similar for collections? I.e., shouldn't there be something like a <link rel="add" href=URL/> element in the representation of the collection itself? The URL would be the URL to POST to in order to add a new entry to the collection. -- Nick
Nick Gall wrote: > --- In rest-discuss@yahoogroups.com, John Panzer <jpanzer@...> wrote: > >> APP defines <link rel="edit" href="..."/> in atom:entry elements for >> deleting/updating entries. The service document defines <collection >> href="..."/> for discovering collections (feeds) of entries, and >> collections are also where you POST new entries. Is there something >> missing? >> > > Wow, I did not know that about the APP! I must admit I've never given > it a close reading. Now that I know about them, I do believe they > satisfy the HEAS constraint ... for entries. > > But why is there not something similar for collections. I.e., shouldn't > there be something like a <link rel="add" href=URL/> element in the > representation of the collection itself? The URL would be the URL to > POST to in order to add an new entry to the collection. > Well, for Atom the constraint is that the collection URL itself is the one that you use to POST to. So I _think_ that this is already available, but it's called "self": <link rel="self" .../>. The big gap that I see is between read-only syndication Atom feeds and corresponding editable collection feeds, where such a correspondence exists (it doesn't always). I don't think that rel="edit" is defined for feeds, just entries, but it would make sense... -- Abstractioneer <http://feeds.feedburner.com/aol/SzHO> John Panzer System Architect http://abstractioneer.org
On 5/11/07, John Panzer <jpanzer@...> wrote: > > The big gap that I see is between read-only syndication Atom feeds and corresponding editable And such a separation is right there in the thesis paper, in section 6.2.3, paragraph 2: "In order to author an existing resource, the author must first obtain the specific source resource URI: ..." So if you can't find one from the other, that seems like an error to me. But what do I know. -- Robert Sayre
On 5/11/07, A. Pagaltzis <pagaltzis@...> wrote: > * Hugh Winkler <hughw@hughw.net> [2007-05-11 22:25]: > > I have argued that APP is "less-weblike" than HTML applications > > because you don't send documents in response to the > > instructions in a form. You build the documents according to a > > recipe you read in a spec, and submit them to urls you get by > > parsing hypertext. By agreement we know that you can POST an > > <entry> document to the url designited by rel=something. > > Yes. This is obviously different from the situation on the web, > where you build an application/x-www-form-urlencoded string > according to a recipe you read in a spec, and by agreement you > know you can post such a string to the URI designated by > <form action>. > > Wait :-) > Well, the body of a url-encoded form data request entity, that you send to one service URL, can vary over time, so it's easy to evolve it. Also, the entity is different for service A and service B supporting the same well defined semantics. Contrast to submitting application/atom+xml, in which case you send the exact same entity in all cases. Hugh > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/> -- Hugh Winkler Wellstorm Development http://www.wellstorm.com/ +1 512 694 4795 mobile (preferred) +1 512 264 3998 office
* Hugh Winkler <hughw@...> [2007-05-12 00:55]: > Well, the body of an url-encoded form data request entity, that > you send to one service URL, can vary over time, so it's easy > to evolve it. Also, the entity is different for service A and > service B supporting the same well defined semantics. Contrast > to submitting application/atom+xml in which case you send the > exact same entity in all cases. a) atom:content can carry content of any MIME type. b) Entries can have any number of atom:link and atom:category elements with varying @rel or @scheme values (respectively). c) Namespaced extension elements. Sorry, unconvinced. I see no principal difference between the extensibility of application/x-www-form-urlencoded and application/atom+xml. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Wow. This thread, along with Tim Ewald's recent blog post on REST and
state machines has really clarified REST for me. At least I think it
has... Let me try to sum up my current thinking as follows:
With REST, the API is on the client.
When you design a RESTful protocol, you aren't specifying an interface
to be exposed by the server. Instead the client conforms to a
hypermedia format that drives its behavior. An important part of this
behavior is interacting with resources on the web (addressed by URIs)
using standard methods of a transfer protocol. These interactions
result in the client transitioning to another state by supplying it
with a new hypermedia document to process.
The specific resources and methods that make up the server-side
interface are not central to the protocol. It is only required that
the responses to the interactions and their effect on the resources
conform to the constraints defined by the standard method definitions
of the transfer protocol. Thus, in a RESTful protocol, the contract
between the client and the server is limited to the constraints of the
underlying standard transfer protocol. In addition to de-coupling the
client and server, it also allows for significant optimizations (e.g.
caching) to be implemented by the client and intermediaries.
Existing, standardized RESTful protocols include HTML (over HTTP), SVG
(over HTTP), VoiceXML (over HTTP) and APP. (APP is the only protocol
in this list that is not designed for driving a user interface -- I'm
not sure if there are others.)
Have I got it?
Andrew Wahbe
Has Microsoft's "Windows Live Contacts API" confused PUT and POST? [1] http://msdn2.microsoft.com/en-us/library/bb447763.aspx [2] http://msdn2.microsoft.com/en-us/library/bb463980.aspx Specifically, isn't [1] a fine example of how not to use PUT? The first example on [2] looks like a job for PUT, although POST is OK too, I guess. The example given at the bottom of [2] looks like a valid use of POST, but change method to PUT and you're back at square [1], right? -Eric [3] http://msdn2.microsoft.com/en-us/library/bb463982.aspx
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
>>>>> "Eric" == Eric J Bowman <eric@...> writes:
Eric> Has Microsoft's "Windows Live Contacts API" confused PUT and
Eric> POST?
Absolutely. PUT makes sure the resource returned at the URL is the
resource just PUT to it. It's a full update/insert.
But I'm sure Microsoft will come out with a plugin for VisualStudio to
help confused developers make sense of it all.
- --
All the best,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.8 <http://mailcrypt.sourceforge.net/>
iD8DBQFGRU+WIyuuaiRyjTYRAtryAKCqALc3xECvdQ8h6KSkWOKn6vws0gCfV5iR
uf2cmml45KXGmTC+Lk2XF/g=
=rEOG
-----END PGP SIGNATURE-----
On 5/11/07, Eric J. Bowman <eric@...> wrote: > Has Microsoft's "Windows Live Contacts API" confused PUT and POST? > > [1] http://msdn2.microsoft.com/en-us/library/bb447763.aspx > [2] http://msdn2.microsoft.com/en-us/library/bb463980.aspx > > Specifically, isn't [1] a fine example of how not to use PUT? Well.... Yaron and I talked about exactly this issue, though I don't remember seeing that specific section in the documents I reviewed (the actual spec, which I reviewed, hasn't been published yet AFAIK). The referenced text above is written in a manner which makes it difficult to see exactly what's going on, as it prescribes restrictions on implementation rather than focusing on interface/message-semantics. It *can* be the case, within the constraints of REST, that the server chooses to set only the properties provided in the representation included in the PUT request, and leaves the other ones with their previous values (partial update). What's important from a REST POV is that the server (and intermediaries) understands that the client isn't requesting this to happen, and so the client should be just as happy if the request obliterated previous values, reset them to defaults, made them random values, or whatever... But [1] sure does look like it's redefining the meaning of PUT, so I hope that's fixed. I'm pretty certain the spec doesn't say that though, unless it was changed after my review. Mark.
On 5/12/07, Mark Baker <distobj@...> wrote: > > On 5/11/07, Eric J. Bowman <eric@...> wrote: > > Has Microsoft's "Windows Live Contacts API" confused PUT and POST? > > > > [1] http://msdn2.microsoft.com/en-us/library/bb447763.aspx > > [2] http://msdn2.microsoft.com/en-us/library/bb463980.aspx > > > > Specifically, isn't [1] a fine example of how not to use PUT? [1] is wrong (I didn't bother looking at [2]). Let's make up a document format to illustrate. You send a GET and receive this in response: <foo> <bar>1</bar> <baz>2</baz> </foo> so it seems that [1] says PUT <foo> <bar>1</bar> </foo> will leave <baz> with content of 2, rather than not present at all. That's pretty short-sighted, because it places application-specific requirements on the server. If PUT meant PUT, then you could just drop Apache in and the client would be none the wiser. -- Robert Sayre
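Robert's foo/bar/baz example reduces to a sketch of the two competing interpretations (plain Python dicts stand in for the XML documents; the function names here are illustrative only, not part of any API):

```python
# Replace-vs-merge semantics for the foo/bar/baz example above.
# Plain dicts stand in for the XML documents; names are hypothetical.

def replace_put(stored, body):
    """PUT as defined by HTTP: the request body replaces stored state."""
    return dict(body)

def merge_put(stored, body):
    """What [1] appears to describe: omitted fields keep their old values."""
    merged = dict(stored)
    merged.update(body)
    return merged

stored = {"bar": 1, "baz": 2}   # what GET returned
body = {"bar": 1}               # the PUT body omits <baz>

print(replace_put(stored, body))  # {'bar': 1} -- <baz> is gone
print(merge_put(stored, body))    # {'bar': 1, 'baz': 2} -- <baz> survives
```

With the standard reading, a generic server (Robert's "just drop Apache in") needs no application knowledge; the merge reading forces every server to understand the document format.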
On May 10, 2007, at 2:53 PM, Roy T. Fielding wrote: > BTW, I don't understand the triangle diagram. I have absolutely > no idea what that has to do with REST, or why it appears on the > wikipedia entry as well. YMMV. I've found it's a valuable means to explain REST to people with an RPC/Web services background: in a WS-* scenario, you have a fixed number of "endpoint" URLs, a variable number of data formats, and a variable number of interfaces. In REST, you have n resource URLs, n data formats, and a fixed number (i.e., 1) interface. It just helps to explain that there is nothing you can express using a WS/RPC approach that you can't express with a RESTful architecture as well - you just vary two of three parameters in both cases. I'm aware of the fact that it's not really correct (as resource URLs are not supposed to identify "command processors"), but it serves its purpose. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Regarding PUT, I'm slightly confused: On May 12, 2007, at 7:58 AM, Mark Baker wrote: > It *can* be the case, within the > constraints of REST, that the server chooses to set only the > properties provided in the representation included in the PUT request, > and leaves the other ones with their previous values (partial update). so what would be the assumptions an intermediary (such as a cache) could rely on? If I PUT something through a caching intermediary, can it cache and serve the representation that has been PUT instead of GETting it from the server? My reading of the spec is that a PUT must include the complete representation, although I've always wondered whether people actually do this in practice. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
I'm not trying to interpret Microsoft's API here, but it's similar to what I'm working on right now, in a way. So I want to make sure *I* have this right. Going back to [2], Microsoft's example uses: "<IsDefault>TrueFalse</IsDefault>" Let's say the first example on [2] is a PUT instead of a POST. Since it includes no <IsDefault> elements in the submission body, the server MAY add those elements and set their values. So a subsequent GET would retrieve a different representation than what was PUT. Nothing wrong with that, right? In the second example on [2], change the request like so: PUT (not POST) /livecontacts/contacts/contact(ContactID)/emails HTTP/1.1 <Emails> <Email> <EmailType>EmailType</EmailType> <IsDefault>TrueFalse</IsDefault> <Address>EmailAddress</Address> </Email> <Email> <EmailType>EmailType</EmailType> <Address>EmailAddress</Address> </Email> </Emails> The server MAY interpret that as intended, which is a change to the "default status" of the first <Email>. If the server assigned an <IsDefault> to the second <Email>, its removal on this PUT could reasonably be interpreted as "no change". On a subsequent GET, the <IsDefault> child of the second <Email> may still appear, unchanged. Without the second <Email> element in the submission body, the server SHOULD interpret the PUT as a change in the default status of the first <Email> AND a removal of the second <Email>. Obviously, if the second <Email> is removed leaving only one in the list, and the PUT attempts to set <IsDefault>False</IsDefault> on the first <Email>, then the server should be free to set <IsDefault>True</IsDefault> and override the client's attempt to break the server logic, right? If I'm on the right track, that would mean Microsoft's error in [1] is not removing elements omitted from a PUT. But I have just shown a perfectly reasonable (I think) use case showing just that.
I am unable to articulate the difference between my understanding of PUT and what appears to be a mistake in Microsoft's implementation. Anyone? -Eric [1] http://msdn2.microsoft.com/en-us/library/bb447763.aspx [2] http://msdn2.microsoft.com/en-us/library/bb463980.aspx
Ittay Dror wrote: > If so, wouldn't a client of a true REST implementation still need to > understand the representation to know what links to follow: say a > shopping cart's representation is paged, so each chunk has also a 'next' > and 'prev' links. A client would need to know their semantics in order > to use them correctly, right? > > > Also, what about when a client doesn't want to just view (GET) the > state, but also to update it, how can he know how data should be sent to > the server, just based on representation? That's not a problem specific to the REST style, or even computing. What it does is put that problem in an appropriate place. "What about meaning?" is the same kind of question as "what about latency?". Another way of looking at it: just because the REST style does a decent job of managing the related complexity of interlingua, even going as far as shining some light on it, it doesn't mean the associated complexity is causal to the style. It would help if we dropped the term "self-description" in favor of something less aggrandizing. Anyone with a basic grasp of formal languages or mathematical logic knows self-description is the linguistic analog of perpetual motion. cheers Bill
Hugh Winkler wrote: > > > I have argued that APP is "less-weblike" than HTML applications > because you don't send documents in response to the instructions in a > form. You build the documents according to a recipe you read in a > spec, and submit them to urls you get by parsing hypertext. You mean like how you can read the recipe for interacting with a form in I don't know, is it 3, specs? > By > agreement we know that you can POST an <entry> document to the url > designited by rel=something. But on the web you get an HTML form from > an URL and the form tells you how to form the submission and where to > POST it. The form tells you no such thing. What you are ascribing to forms is a kind of gensym fallacy rather than any improved expressive power. See my post elsewhere about self-description. IOW, I'd like to see the argument that says forms are in a different language class than APP. cheers Bill
Jeffrey Winter wrote: > If that book becomes popular, which it seems it will be based on > the response, it may become the working defintion of REST to many > people, which, *if* it is missing important aspects of the > definition, would be a shame. In terms of authoritative import, the coup will be getting Dr. Fielding to write the foreword. cheers Bill
Stefan Tilkov wrote: > > > On May 10, 2007, at 2:53 PM, Roy T. Fielding wrote: > > > BTW, I don't understand the triangle diagram. I have absolutely > > no idea what that has to do with REST, or why it appears on the > > wikipedia entry as well. YMMV. > > I've found it's a valuable means to explain REST to people with an > RPC/Web services background: in a WS-* scenario, you have a fixed > number of "endpoint" URLs, a variable number of data formats, and a > variable number of interfaces. In REST, you have n resource URLs, n > data formats, and a fixed number (i.e., 1) interface. It just helps > to explain that there is nothing you can express using a WS/RPC > approach that you can't express with a RESTful architecture as well - > you just vary two of three parameters in both cases. > > I'm aware of the fact that it's not really correct (as resource URLs > are not supposed to identify "command processors"), but it serves its > purpose. Something similar. I find that to get things going, you can tell people that resources are just like objects, but they have tightly controlled interfaces. That's because you don't know who your callers are going to be, and you have little say in when they upgrade, so revving the API later on is tough. Then I tell them what's wrong by design with many exposed web systems and frameworks today is that they put all the objects behind manager and controller classes, which inevitably introduces problems. People with a grasp of OOAD or domain specific modeling can follow that argument. The hard bit then is persuading people not to pretend the client and server code are in the same address space when it comes to programming or you'll unintentionally couple the client and server. [As a result part of me thinks that JSR311 not necessarily providing a client API is a good thing.] But this is just repeating Steve Loughran's point that lots of people do not want to do distributed programming, when you get right down to it. cheers Bill
under.bluewaters wrote: > /plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC > > Basically, its an RPC call that accepts and arbitrary number of sites > and species. I really can't imagine a scheme for doing this with rest. > The only thing I could think of is POST+redirect, and cache the image > that it is redirected to. Unfortunately, you can't really encode a > POST into an img tag. I would have to do some ajax calls and insert a > link to the redirect location. Make it a query, declare victory. Check this out: http://bitworking.org/projects/sparklines/ cheers Bill
Josh Sled wrote: > On Fri, 2007-05-11 at 15:21 +0000, under.bluewaters wrote: >> Ok, now imagine that I want to allow users to create a plot comparing >> multiple species and multiple sites! As an RPC call, it would look >> something like this: >> >> /plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC > > What if you change 'plot' from a verb to a noun? Josh is right, but it's more than the signature. If you treat plotAbundance as a way to ask about a coordinate space, then it's mathematically elegant, functionally elegant, scalable, trivially parallelisable, and all that. It's how you're thinking about what's behind the curtain that matters. I really wouldn't get hung up on the URI signature, except to say I'd avoid imposing an artificial hierarchy when a relational style will do fine. cheers Bill
On 5/11/07, A. Pagaltzis <pagaltzis@...> wrote: > * Hugh Winkler <hughw@...> [2007-05-12 00:55]: > > Well, the body of an url-encoded form data request entity, that > > you send to one service URL, can vary over time, so it's easy > > to evolve it. Also, the entity is different for service A and > > service B supporting the same well defined semantics. Contrast > > to submitting application/atom+xml in which case you send the > > exact same entity in all cases. > > a) atom:content can carry content of any MIME type. > > b) Entries can have any number of atom:link and atom:category > elements with varying @rel or @scheme values (respectively). > > c) Namespaced extension elements. > > Sorry, unconvinced. > > I see no principal difference between the extensibility of > application/x-www-form-urlencoded and application/atom+xml. The principal difference is that for url encoded form data, the server described to you the information it needs to perform an action. In the APP case, or any document exchange case, you have to have both a good versioning story for the document, and you have to be willing to say when you've run out of string in versioning, and invent a new rel="xxx" designating a uri that accepts e.g. atom2+xml. Hypothesis: HTML forms + application/x-www-form-urlencoded is to pre-agreed document formats, as GET/PUT/POST/DELETE is to arbitrary verbs. Hugh
On 5/12/07, Bill de hOra <bill@...> wrote: > Hugh Winkler wrote: > > > > > > I have argued that APP is "less-weblike" than HTML applications > > because you don't send documents in response to the instructions in a > > form. You build the documents according to a recipe you read in a > > spec, and submit them to urls you get by parsing hypertext. > > You mean like how you can read the recipe for interacting with a form in > I don't know, is it 3, specs? > Unsure if you mean there are 3 specs that define form field semantics? Or just 3 specs touching on HTML forms? > > By > > agreement we know that you can POST an <entry> document to the url > > designited by rel=something. But on the web you get an HTML form from > > an URL and the form tells you how to form the submission and where to > > POST it. > > The form tells you no such thing. What you are ascribing to forms is a > kind of gensym fallacy rather than any improved expressive power. See my > post elsewhere about self-description. IOW, I'd like to see the argument > that says forms are in a different language class than APP. > Running out of gas here, Bill... got a link I can look at? > cheers > Bill >
Duncan Cragg wrote: > Having said all that, I would always go for POST-redirect, and find > a way to get the resultant cacheable, opaque URI into your img tag. There's no reason to use POST. You can perfectly easily have the GET result in a redirect if what you're trying to achieve is an opaque URL without a query-string, since the operation is safe and idempotent.
I just recently gave a very short presentation (10 minutes) on REST
and RDF at NetBeans day during James Gosling's closing presentation
to 1000 developers [1], and a longer one hour discussion during a BOF
attended by over 250 people at JavaOne [2].
The message is simple: RDF and REST form a perfect mesh. URIs name
Resources, which have multiple representations; REST of course stands
for Representational State Transfer, and RDF is the simplest possible
way to describe Resources. They are three parts of the triangle. Of
course I don't have to convince anyone on this list of the power,
simplicity and clarity of REST. I do urge RESTafarians though to
start looking at RDF more carefully (not necessarily the XML version)
as the other side of the task they are endeavoring to accomplish.
Only the simplest possible thing on the web can work. URIs, REST and
RDF are each perfections of simplicity that mesh together perfectly.
As an example, try describing a web resource using RDF. If you find
it difficult to do, it's probably because the application is badly
architected. So, for example, on dev.java.net all users have the same
start page url
https://www.dev.java.net/servlets/StartPage
so what is that page really naming? How is one going to describe it?
One needs to describe it using some blank node such as
[] :representationOf <StartPage>;
:ownedBy <http://bblfish.net/people/henry/card#me>;
....
i.e., there is no way to refer to the resource uniformly. The same is
true of xmlrpc or soap messages. Of course the correct way to set
this up would be to give every person their own start page
@prefix sioc: <http://rdfs.org/sioc/ns#>
<https://www.dev.java.net/people/bblfish> a sioc:User ;
sioc:email <mailto:henry.story@...> .
By this simple exercise I believe you can quickly spot and explain
the problem with badly architected web applications.
Henry
[1] http://blogs.sun.com/bblfish/entry/dropping_some_doap_into_netbeans
[2] http://blogs.sun.com/bblfish/entry/semantic_web_birds_of_a1
Home page: http://bblfish.net/
Sun Blog: http://blogs.sun.com/bblfish/
Foaf name: http://bblfish.net/people/henry/card#me
Hi all,
I'm new to this group and fairly new to REST too. So,
what I'm about to
ask may have been discussed here before, but please,
bear with me :-)
Let's assume that we have a following situation:
(I'll write the key terms CAPITALIZED for better
visibility, I'm not
yelling :-) )
There is a RESTful APPLICATION available on the web
and a client is
using it (human client through web browser, to be
precise).
Client has loaded the home page and therefore client
can see the
REPRESENTATION of the CURRENT STATE of APPLICATION.
Client can initiate TRANSITION (by means of selecting
one of HYPERMEDIA
LINKS contained within the REPRESENTATION of the
current state) of the APPLICATION to another STATE.
LINK contains (is?)
URI which identifies one of application's RESOURCES.
By means of that URI and selected OPERATION derived
from UNIFORM
INTERFACE (GET, PUT, ...) that RESOURCE is being
manipulated with, meaning that ANY number of ACTIONS
are being initiated
"behind the scenes" on server. Those ACTIONS
will bring APPLICATION to NEW STATE and our client
will receive the
REPRESENTATION of some RESOURCE which describes that
NEW (but now CURRENT) STATE of APPLICATION.
And on we go again, based on new links in that new
representation.
Well, am I getting it right?
Cheers,
Miran
Hi,
I want to take another stab at understanding the finer details of good REST design. I first tried posting a question on the 'REST intro slides' thread, but got no response.
I'm trying to
understand what is exactly good REST design, esp. the 'hypermedia'
part of it.
Take the case of the
shopping cart. There are several ways to implement it. Am I right in
stating that:
* "REST-like" implementation will return a representation of the shopping cart as a list of catalog numbers of items:
* "almost-REST" implementation will return a list of URIs of the items.
* REST implementation will return a list of links where the link targets are the items
Thanks,
Ittay
On 12/05/07, Henry Story <henry.story@...> wrote: > [...] > [] :representationOf <StartPage>; > :ownedBy <http://bblfish.net/people/henry/card#me>; > .... representationOf reminds me of "Resource and Representation Relationship Vocabulary" from http://www.hackcraft.net/rep/rep.xml Did anyone else develop a vocabulary to address RESTful things? Cheers, -- Laurian Gridinoc, purl.org/net/laur
On Sun, 2007-05-13 at 10:32 +0300, Ittay Dror wrote:
> * "almost-REST" implementation will return a list of URIs of the
> items.
[...]
> <item>http://example.org/items/1342455</item>
> * REST implementation will return a list of links where the link
> targets are the items
[...]
> <item><a href="http://example.org/items/1342455"/></item>
I don't see a distinction between putting the URL in the text vs. in an
attribute that makes one "almost-REST" and one "REST". It's more
important to use the links at all, and that the media type is well-known
and understood by clients.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
This is still RPC. Even if you are breaking the data down into
multiple linked documents -- you're still just passing data back from
a remote procedure call. You need to constrain the transitions that
the client can take from this state.
For example:
<store>
<shopping-cart href="http://example.org/carts/123abc">
<item href="http://example.org/carts/123abc/item1">
<description href="http://example.org/catalog/1234"/>
</item>
<item href="http://example.org/carts/123abc/item2">
<description href="http://example.org/catalog/5678"/>
</item>
</shopping-cart>
<checkout href="http://example.org/checkout"/>
</store>
Here, you not only have the data but URIs of every possible transition
from this point.
- You can add new things by POSTing them to
http://example.org/carts/123abc
- You can view/change/delete an item by GET/PUT/DELETE of
http://example.org/carts/123abc/item[1|2]
- You can also view the description of the item by GET of
http://example.org/catalog/[1234|5678]
- You can checkout by posting the cart to http://example.org/checkout
There is no published Server API (a la WSDL or WADL). There is a well
defined document format that says that given the above document you
get the options that I've listed. As I was trying to say here:
http://tech.groups.yahoo.com/group/rest-discuss/message/8411 , the API
is on the client.
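The transitions listed above can be sketched as a client that takes every URI from the representation itself, never from URI templates baked into client code. This is a hypothetical Python illustration using the document format above, not any published client library:

```python
# Extract every available transition from the store document above.
# All URIs come from href attributes in the representation.
import xml.etree.ElementTree as ET

DOC = """<store>
  <shopping-cart href="http://example.org/carts/123abc">
    <item href="http://example.org/carts/123abc/item1">
      <description href="http://example.org/catalog/1234"/>
    </item>
  </shopping-cart>
  <checkout href="http://example.org/checkout"/>
</store>"""

root = ET.fromstring(DOC)
cart = root.find("shopping-cart")
cart_uri = cart.get("href")                              # POST new items here
item_uris = [i.get("href") for i in cart.findall("item")]  # GET/PUT/DELETE each
checkout_uri = root.find("checkout").get("href")         # POST the cart here

print(cart_uri, item_uris, checkout_uri)
```

The "API" is exactly this knowledge of the media type; the server can move any of these URIs tomorrow without breaking the client.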
On May 11, 2007, at 1:02 PM, Jeffrey Winter wrote: > I'm curious to know if you were asked to review the forthcoming > (already-out ?) "RESTful Web Services". I haven't read it, but > have read Sam Ruby's blog over the years and am wondering how > well his ideas map to your (i.e., the correct by definition) > concept of REST. I was asked to review an early draft, but I don't know how far that was (or is) from their current draft. I really don't have the time. Sam's a smart guy -- he'll figure it out once he has to explain it. > If that book becomes popular, which it seems it will be based on > the response, it may become the working defintion of REST to many > people, which, *if* it is missing important aspects of the > definition, would be a shame. *shrug* I don't see how that is any different from any of the other technologies depicted on bookshelves everywhere. The book isn't the one that I would write, but then neither are any of the books on HTTP. ....Roy
* Ittay Dror <ittayd@...> [2007-05-13 09:35]:
> * "almost-REST" implementation will return a list of URIs of the items.
>
> <shopping-cart>
> <items>
> <item>http://example.org/items/1342455</item>
> <item>http://example.org/items/4365456</item>
> </items>
> </shopping-cart>
>
> * REST implementation will return a list of links where the link targets are the items
>
> <shopping-cart>
> <items>
> <item><a href="http://example.org/items/1342455"/></item>
> <item><a href="http://example.org/items/4365456"/></item>
> </items>
> </shopping-cart>
You seem to be confused about what a link is. “Link” doesn’t mean
“HTML <a> element.” FWIW, these are all links:
<img src="http://example.org/items/123456/detail.jpg">
<link rel="stylesheet" type="text/css" href="http://example.org/main.css">
<link rel="alternate" href="http://example.org/news/what-is-a-link" />
<content src="http://example.org/talks/what-is-rest.mp4"/>
<enclosure>http://example.org/talks/what-is-rest.mp4</enclosure>
<link>http://example.org/news/what-is-a-link</link>
A link is anything that an application understands to be a URI.
If the application that consumes your shopping cart XML
understands the content model of your <item> elements to be
dereferenceable URIs, then your “almost-REST” example is just as
RESTful as the other.
However, what is missing from your example is any links that the
client can follow in order to manipulate the shopping cart. It
might be that the client manipulates the cart by PUTting or
POSTing to the same URI where it retrieved the shopping cart
representation. In that case, what you have shown is RESTful.
If the application must use different URIs to manipulate the
cart, however, then these URIs must be included in the
representation so that the client can follow links instead of
constructing URIs.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
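Aristotle's point, that a link is whatever the consuming application understands to carry a URI, can be sketched like this (a hypothetical Python illustration; the element shapes follow the examples above, and where the URI lives is knowledge of the media type, not of XML in general):

```python
# "A link is anything that an application understands to be a URI."
# The rules below encode media-type knowledge: some elements carry the
# URI in an attribute, some in their text content.
import xml.etree.ElementTree as ET

def links_in(xml_text):
    """Collect URIs from both attribute-style and content-style links."""
    found = []
    for el in ET.fromstring(xml_text).iter():
        # URI carried in an attribute (img/@src, link/@href, content/@src)
        for attr in ("href", "src"):
            if el.get(attr):
                found.append(el.get(attr))
        # URI carried as element content (enclosure, plain <link>)
        if el.tag in ("enclosure", "link") and el.text and el.text.strip():
            found.append(el.text.strip())
    return found

doc = """<feed>
  <link rel="alternate" href="http://example.org/news/what-is-a-link"/>
  <enclosure>http://example.org/talks/what-is-rest.mp4</enclosure>
</feed>"""
print(links_in(doc))
```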
Another consideration that should be discussed is the interaction of partial updates using XML representations with any required-ness in schema definitions. If the designers choose to use an XML schema that defines required elements, then the partial update may not conform to that particular schema. There are several routes out of this situation but I didn't see them discussed in the MS pages referenced below. > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Robert Sayre > Sent: Friday, May 11, 2007 11:09 PM > To: Mark Baker > Cc: eric@...; rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Bass-ackwards? > > On 5/12/07, Mark Baker <distobj@...> wrote: > > > > On 5/11/07, Eric J. Bowman <eric@...> wrote: > > > Has Microsoft's "Windows Live Contacts API" confused PUT > and POST? > > > > > > [1] http://msdn2.microsoft.com/en-us/library/bb447763.aspx > > > [2] http://msdn2.microsoft.com/en-us/library/bb463980.aspx > > > > > > Specifically, isn't [1] a fine example of how not to use PUT? > > [1] is wrong (I didn't bother looking at [2]). Let's make up > a document format to illustrate. You send a GET and receive this in > response: > > <foo> > <bar>1</bar> > <baz>2</baz> > </foo> > > so it seems that [1] says > > PUT > > <foo> > <bar>1</bar> > </foo> > > will leave <baz> with content of 2, rather than not present at all. > That's pretty short-sighted, because it places > application-specific requirements on the server. If PUT meant > PUT, then you could just drop Apache in and the client would > be none the wiser. > > -- > > Robert Sayre > > > > Yahoo! Groups Links > > >
* under.bluewaters <chad@...> [2007-05-11 17:30]: > As an RPC call, it would look something like this: > > /plotAbundance?site1=HAZARDS&site2=ANACAPA&species1=PCLA&species2=CNIC > > Basically, its an RPC call that accepts and arbitrary number of > sites and species. I really can't imagine a scheme for doing > this with rest. You seem to be confusing “REST” with “cool URIs”. The two are only distantly related. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On 5/12/07, Stefan Tilkov <stefan.tilkov@...> wrote: > Regarding PUT, I'm slightly confused: > > On May 12, 2007, at 7:58 AM, Mark Baker wrote: > > > It *can* be the case, within the > > constraints of REST, that the server chooses to set only the > > properties provided in the representation included in the PUT request, > > and leaves the other ones with their previous values (partial update). > > so what would be the assumptions an intermediary (such as a cache) > could rely on? Just the definition of PUT. > If I PUT something through a caching intermediary, can > it cache and serve the representation that has been PUT instead of > GETting it from the server? Sure. I haven't seen a cache that does it though. > My reading of the spec is that a PUT must include the complete > representation, although I've always wondered whether people actually > do this in practice. PUT requests always include the complete representation by definition because the message is always self-descriptive about its intent. But the server has latitude beyond that meaning. Perhaps an example... PUT /lightbulb HTTP/1.0 Content-Type: application/lightbulb+xml <bulb> <state>off</state> </bulb> That message requests that the lightbulb be turned off. A subsequent GET might return: <bulb> <state>off</state> <temp>150</temp> </bulb> The point is that you can't interpret the initial PUT request as "set state=off and temp=0". It just means "state=off", and the server is free to do what it wants with "temp". At least, that's how I understand it. Mark.
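Mark's lightbulb can be sketched server-side like so (a hypothetical Python sketch of the latitude he describes; the class and format follow his example, nothing here is a real API):

```python
# The server honors the state asserted in the PUT body but keeps
# authority over <temp>, which the client neither sets nor resets.
import xml.etree.ElementTree as ET

class Lightbulb:
    def __init__(self):
        self.state, self.temp = "on", 150

    def put(self, body):
        # Apply only what the request asserts: the desired state.
        self.state = ET.fromstring(body).findtext("state")
        # temp is server-owned; the PUT says nothing about it.

    def get(self):
        return ("<bulb><state>%s</state><temp>%d</temp></bulb>"
                % (self.state, self.temp))

bulb = Lightbulb()
bulb.put("<bulb><state>off</state></bulb>")
print(bulb.get())  # state is now off; temp is whatever the server says
```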
On 5/13/07, Mark Baker <distobj@...> wrote: > > The point is that you can't interpret the initial PUT request as "set > state=off and temp=0". It just means "state=off", and the server is > free to do what it wants with "temp". At least, that's how I > understand it. Is that right? I think it is, as long as the media type allows GET requests to respond with HTTP/1.1 200 OK Content-Type: application/lightbulb+xml <bulb> <state>off</state> </bulb> If the protocol /expects/ that <temp> will be re-inserted by the server, that's a little different, because it would require relatively detailed knowledge of the media type in order to accept a PUT request, right? Is it OK to extend PUT by requiring servers to re-insert fields required by a schema? -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
* Mark Baker <distobj@...> [2007-05-14 05:10]: > On 5/12/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > If I PUT something through a caching intermediary, can it > > cache and serve the representation that has been PUT instead > > of GETting it from the server? > > Sure. I haven't seen a cache that does it though. Is that really the case? My understanding is that the server is not required to return a bit-for-bit copy or even a semantically identical copy of the PUT request body on a subsequent GET, so how can it be OK for an intermediary to serve that PUT body without checking with the origin server? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote: > > > * Mark Baker <distobj@acm. org <mailto:distobj%40acm.org>> [2007-05-14 > 05:10]: > > On 5/12/07, Stefan Tilkov <stefan.tilkov@ innoq.com > <mailto:stefan.tilkov%40innoq.com>> wrote: > > > If I PUT something through a caching intermediary, can it > > > cache and serve the representation that has been PUT instead > > > of GETting it from the server? > > > > Sure. I haven't seen a cache that does it though. > > Is that really the case? My understanding is that the server is > not required return a bit-for-bit copy or even a semantically > identical copy of the PUT request body on a subsequent GET, so > how can it be OK for an intermediary to serve that PUT body > without checking with the origin server? I would think all an intermediary should do is invalidate that representation. But this is certainly a good candidate for clarification in a potential revision of RFC2616. Best regards, Julian
On 5/14/07, A. Pagaltzis <pagaltzis@...> wrote: > * Mark Baker <distobj@...> [2007-05-14 05:10]: > > On 5/12/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > > If I PUT something through a caching intermediary, can it > > > cache and serve the representation that has been PUT instead > > > of GETting it from the server? > > > > Sure. I haven't seen a cache that does it though. > > Is that really the case? My understanding is that the server is > not required return a bit-for-bit copy or even a semantically > identical copy of the PUT request body on a subsequent GET, so > how can it be OK for an intermediary to serve that PUT body > without checking with the origin server? Well, the intermediary would certainly need to also see a 2xx response to the PUT request before making that representation available, which is a form of "checking with the origin server". Mark.
On 5/14/07, Robert Sayre <sayrer@...> wrote: > On 5/13/07, Mark Baker <distobj@...> wrote: > > > > The point is that you can't interpret the initial PUT request as "set > > state=off and temp=0". It just means "state=off", and the server is > > free to do what it wants with "temp". At least, that's how I > > understand it. > > Is that right? I think it is, as long as the media type allows GET > requests to respond with > > HTTP/1.1 200 OK > Content-Type: application/lightbulb+xml > > <bulb> > <state>off</state> > </bulb> > > if the the protocol /expects/ that <temp> will be re-inserted by the > server, that's a little different, because it would require relatively > detailed knowledge of the media type in order accept a PUT request, > right? Sounds right. > Is it OK to extend PUT by requiring servers to re-insert fields > required by a schema? Hmm. Why would that need to be part of the protocol? As long as the client understands the media type and the meaning of PUT, and the representation contains sufficient information for PUT to be performed, wouldn't all be well? Mark.
* Mark Baker <distobj@...> [2007-05-14 14:00]: > On 5/14/07, A. Pagaltzis <pagaltzis@...> wrote: > > * Mark Baker <distobj@...> [2007-05-14 05:10]: > > > On 5/12/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > > > If I PUT something through a caching intermediary, can it > > > > cache and serve the representation that has been PUT > > > > instead of GETting it from the server? > > > > > > Sure. I haven't seen a cache that does it though. > > > > Is that really the case? My understanding is that the server > > is not required return a bit-for-bit copy or even a > > semantically identical copy of the PUT request body on a > > subsequent GET, so how can it be OK for an intermediary to > > serve that PUT body without checking with the origin server? > > Well, the intermediary would certainly need to also see a 2xx > response to the PUT request before making that representation > available which is a form of "checking with the origin server". Obviously if the intermediary saw anything other than a 2xx, it couldn’t cache the request body. But even if it did see a 2xx response, the semantics of PUT (namely, that the origin server may do anything it wants with the request body) would seem to absolutely preclude caching by intermediaries. It would arguably be fine to cache the *response* body IFF the status was 200, but I can see no case in which it is OK to cache the *request* body. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Mark Baker wrote: > > > On 5/14/07, A. Pagaltzis <pagaltzis@gmx.de> > wrote: > > * Mark Baker <distobj@acm.org> > [2007-05-14 05:10]: > > > On 5/12/07, Stefan Tilkov <stefan.tilkov@innoq.com> wrote: > > > > If I PUT something through a caching intermediary, can it > > > > cache and serve the representation that has been PUT instead > > > > of GETting it from the server? > > > > > > Sure. I haven't seen a cache that does it though. > > > > Is that really the case? My understanding is that the server is > > not required to return a bit-for-bit copy or even a semantically > > identical copy of the PUT request body on a subsequent GET, so > > how can it be OK for an intermediary to serve that PUT body > > without checking with the origin server? > > Well, the intermediary would certainly need to also see a 2xx response > to the PUT request before making that representation available, which > is a form of "checking with the origin server". As far as I understand, an intermediary has no way to predict a future GET response based on the PUT request body. Unless it has additional information indicating what the server did with the entity body (see <http://greenbytes.de/tech/webdav/draft-reschke-http-etag-on-write-05.html>). Best regards, Julian
Roy clarified: > The book isn't the > one that I would write, but then neither are any of the books on HTTP. I would pay good coin to have a book by Roy Fielding in my library! ;-) Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
Miran, I think your explanation is correct and consistent with REST. However, in my personal opinion (some may disagree) it is also important for a RESTful application to store its state on the client rather than on the server. That provides lots of benefits in real world implementation. Cheers, Hovhannes --- In rest-discuss@yahoogroups.com, Pero Peric <zlac53@...> wrote: > > Hi all, > > I'm new to this group and fairly new to REST too. So, > what I'm about to > ask may have been discussed here before, but please, > bear with me :-) > > Let's assume that we have a following situation: > (I'll write the key terms CAPITALIZED for better > visibility, I'm not > yelling :-) ) > > There is a RESTful APPLICATION available on the web > and a client is > using it (human client through web browser, to be > precise). > Client has loaded the home page and therefore client > can see the > REPRESENTATION of the CURRENT STATE of APPLICATION. > Client can initiate TRANSITION (by means of selecting > one of HYPERMEDIA > LINKS contained within the REPRESENTATION of the > current state) of the APPLICATION to another STATE. > LINK contains (is?) > URI which identifies one of application's RESOURCES. > By means of that URI and selected OPERATION derived > from UNIFORM > INTERFACE (GET, PUT, ...) that RESOURCE is being > manipulated with, meaning that ANY number of ACTIONS > are being initiated > "behind the scenes" on server. Those ACTIONS > will bring APPLICATION to NEW STATE and our client > will receive the > REPRESENTATION of some RESOURCE which describes that > NEW (but now CURRENT) STATE of APPLICATION. > And on we go again, based on new links in that new > representation. > > > Well, am I getting it right? > > > Cheers, > Miran
On 5/14/07, Julian Reschke <julian.reschke@...> wrote: > As far as I understand, an intermediary has no way to predict a future > GET response based on the PUT request body. As I see it, if a server accepts a representation via PUT, then that representation *is* a representation of the targeted resource. It may not be one that would ever be returned via GET, but it is still one. Consider that if the proxy/origin-server link died right after the PUT succeeded, then the PUT request representation could be used as an estimate of the state of the targeted resource. That doesn't seem unreasonable to me. Anyhow, this is mostly just conjecture on my part based on my understanding of the model. I'd want to run through some use cases before I deployed software which made these assumptions. Mark.
Mark Baker wrote: > > > On 5/14/07, Julian Reschke <julian.reschke@gmx.de> wrote: > > As far as I understand, an intermediary has no way to predict a future > > GET response based on the PUT request body. > > As I see it, if a server accepts a representation via PUT, then that > representation *is* a representation of the targeted resource. It Yep. > may not be one that would ever be returned via GET, but it is still > one. Consider that if the proxy/origin-server link died right after > the PUT succeeded, then the PUT request representation could be used > as an estimate of the state of the targeted resource. That doesn't > seem unreasonable to me. If you do so, what ETag and Last-Modified headers do you serve it with? How long do you keep that entry? It seems to me that this is an attempt to optimize a write operation at the risk of negatively affecting reads. > Anyhow, this is mostly just conjecture on my part based on my > understanding of the model. I'd want to run through some use cases > before I deployed software which made these assumptions. How would you make sure that your tests include the "right" set of combinations of clients, intermediaries and servers? Best regards, Julian
>>>>> "Chad" == Chad Burt <chad@...> writes:
Chad> It's an easy way of tying all the RESTful crud actions into
Chad> some good URI conventions. It puts you down this road of
Chad> thinking that REST is about public url transparency. I get
Chad> the impression, and in fact believe myself that URI
Chad> transparency is a good thing. It's just that it is not a
Chad> component of REST.
I believe that mapping your URLs is a cornerstone of REST practice. It
really helps you to think about your application and what resources
there actually are.
I don't believe someone who considers URLs to be just opaque strings
will have a good chance of developing a good REST application.
--
All the best,
Berend de Boer
Not much difference between 2 and 3, as others have stated.
The simplest way to do this is to create, or even better find, a
well-known rdf ontology for the concepts you are using. It usually falls
out very clearly what to do then.
In N3 you would write
<> a :ShoppingCart;
:contains <http://example.org/items/1342455>;
:contains <http://example.org/items/4365456> .
If you feel it would be better to give the client somewhat more
information about your items, in order to avoid it having to make
another http call, you could add that to the message with
<http://example.org/items/1342455> a :Book;
:title "Begriffsschrift";
:author <http://en.wikipedia.org/wiki/Frege#p> .
<http://example.org/items/4365456> a :CD;
:title "The Dark Side of the Moon";
:band <http://en.wikipedia.org/wiki/Pink_floyd#b> .
Then since you want to be able to interact with as many clients as
possible, many of whom don't have good rdf tools, you would want to
create an xml crystallization of the rdf graph [1], so that they can
use the simple DOM tools they have at their disposal.
Using rdf makes it much easier to define the vocabulary you are
working with. You will probably find a good ontology that suits your
needs already out there. The rdf crystallization is a little more
tricky, but not more tricky than creating good xml to start off with.
Henry
[1] http://blogs.sun.com/bblfish/entry/crystalizing_rdf
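As a small illustration of the crystallization step above (element and attribute names here are my own invention, not taken from any ontology or from [1]), the cart graph could be flattened into plain XML with ordinary DOM tooling:

```python
# Hypothetical "crystallization" of the cart graph into plain XML,
# so clients without RDF tooling can use simple DOM access.
import xml.etree.ElementTree as ET

items = [
    ("http://example.org/items/1342455", "Book", "Begriffsschrift"),
    ("http://example.org/items/4365456", "CD", "The Dark Side of the Moon"),
]

cart = ET.Element("shopping-cart")
for uri, kind, title in items:
    # Keep the item URI as a link, and inline the extra properties.
    item = ET.SubElement(cart, "item", href=uri, type=kind)
    ET.SubElement(item, "title").text = title

print(ET.tostring(cart, encoding="unicode"))
```

The point is only that each triple of the graph survives as an attribute or child element a DOM client can reach directly.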
On 13 May 2007, at 00:32, Ittay Dror wrote:
>
> Hi,
>
>
>
> I want to take another stab at understanding the finer details of
> good REST design. I first tried posting a question on the 'REST
> intro slides' thread, but got no response.
>
>
>
> I'm trying to understand what is exactly good REST design, esp. the
> 'hypermedia' part of it.
>
>
>
> Take the case of the shopping cart. There are several ways to
> implement it. Am I right in stating that:
>
> * "REST-like" implementation will return a representation of the
> shopping cart as a list of catalog numbers of items:
>
> <shopping-cart>
>
> <items>
>
> <item>1342455</item>
>
> <item>4365456</item>
>
> </items>
>
> </shopping-cart>
>
>
> * "almost-REST" implementation will return a list of URIs of the
> items.
>
> <shopping-cart>
>
> <items>
>
> <item>http://example.org/items/1342455</item>
>
> <item>http://example.org/items/4365456</item>
>
> </items>
>
> </shopping-cart>
>
>
>
>
> * REST implementation will return a list of links where the link
> targets are the items
>
> <shopping-cart>
>
> <items>
>
> <item><a href="http://example.org/items/1342455"/></item>
>
> <item><a href="http://example.org/items/4365456"/></item>
>
> </items>
>
> </shopping-cart>
>
>
>
>
>
>
> Thanks,
>
> Ittay
>
>
>
> --
> Ittay Dror
> Chief Architect,
> R&D, Qlusters Inc.
> Web: qlusters.com
> Email: ittayd@...
> Phone: +972-3-6081994
>
> openQRM - Data Center Provisioning
> ------
> I own this number: D0E008A921FF04A9DB8C12668E4315F4. Get your own
> athttp://www.freedom-to-tinker.com
>
>
On 5/14/07, Julian Reschke <julian.reschke@...> wrote: > Mark Baker wrote: > > > > > > On 5/14/07, Julian Reschke <julian.reschke@gmx.de> wrote: > > > As far as I understand, an intermediary has no way to predict a future > > > GET response based on the PUT request body. > > > > As I see it, if a server accepts a representation via PUT, then that > > representation *is* a representation of the targeted resource. It > > Yep. > > > may not be one that would ever be returned via GET, but it is still > > one. Consider that if the proxy/origin-server link died right after > > the PUT succeeded, then the PUT request representation could be used > > as an estimate of the state of the targeted resource. That doesn't > > seem unreasonable to me. > > If you do so, what ETag and Last-Modified headers do you serve it with? > How long do you keep that entry? Last-Modified would be the time the 2xx response was received. ETag wouldn't be set for the scenario I had in mind (non-surrogate proxy). > > It seems to me that this is an attempt to optimize a write operation at > the risk of negatively affecting reads. Could be. > > Anyhow, this is mostly just conjecture on my part based on my > > understanding of the model. I'd want to run through some use cases > > before I deployed software which made these assumptions. > > How would you make sure that your tests include the "right" set of > combinations of clients, intermediaries and servers? *shrug* Mark.
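The policy Mark describes can be made concrete. This is a hedged sketch, not real proxy code: a hypothetical non-surrogate intermediary stores a PUT request body as a candidate representation only after seeing a 2xx response, records the time of that response (for Last-Modified), and assigns no ETag:

```python
# Hypothetical cache policy for a non-surrogate proxy observing PUTs.
import time

class PutAwareCache:
    def __init__(self):
        self.entries = {}  # uri -> (body, last_modified_timestamp)

    def observe_put(self, uri, request_body, status):
        if 200 <= status < 300:
            # Only a 2xx tells us the server accepted this
            # representation; stamp it with the response time.
            self.entries[uri] = (request_body, time.time())
        else:
            # A non-2xx gives no assurance about resource state,
            # so drop any stored entry for safety.
            self.entries.pop(uri, None)

    def lookup(self, uri):
        return self.entries.get(uri)

cache = PutAwareCache()
cache.observe_put("/bulb", "<bulb><state>off</state></bulb>", 204)
print(cache.lookup("/bulb")[0])
```

Julian's objection still applies: even this conservative policy risks serving a representation the origin server would never return from GET.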
[Hmm, I thought I had sent this mail to the list. Apparently not.]
* Pero Peric <zlac53@...> [2007-05-13 01:30]:
> Client has loaded the home page and therefore client can
> see the REPRESENTATION of the CURRENT STATE of APPLICATION.
I'd call this the initial state.
> Well, am I getting it right?
It looks right. But you seem to be thinking of "state" as the
current state of the server -- when actually the state is on
the client (and only on the client).
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
The server maintains state in the sense that it knows the contents of the client's shopping cart at all times. However, the state of the interaction (the protocol) with the server is kept in the client. For example, when checking out, the client and server don't begin a long transaction of exchanging messages (sending the address, then the credit card, etc.). Instead, either the whole of the information is given to the server in one go, or each piece of information changes the representation. In the above example, it could mean there's a checkout resource that has all the details or references other resources.
ittay
hovhannes_tumanyan
wrote on 05/14/07 18:13:
Miran,
I think your explanation is correct and consistent with REST. However,
in my personal opinion (some may disagree) it is also important for
a RESTful application to store its state on the client rather than on
the server. That provides lots of benefits in real world
implementation.
Cheers,
Hovhannes
--- In rest-discuss@yahoogroups.com, Pero Peric <zlac53@...> wrote:
>
> Hi all,
>
> I'm new to this group and fairly new to REST too. So,
> what I'm about to
> ask may have been discussed here before, but please,
> bear with me :-)
>
> Let's assume that we have a following situation:
> (I'll write the key terms CAPITALIZED for better
> visibility, I'm not
> yelling :-) )
>
> There is a RESTful APPLICATION available on the web
> and a client is
> using it (human client through web browser, to be
> precise).
> Client has loaded the home page and therefore client
> can see the
> REPRESENTATION of the CURRENT STATE of APPLICATION.
> Client can initiate TRANSITION (by means of selecting
> one of HYPERMEDIA
> LINKS contained within the REPRESENTATION of the
> current state) of the APPLICATION to another STATE.
> LINK contains (is?)
> URI which identifies one of application's RESOURCES.
> By means of that URI and selected OPERATION derived
> from UNIFORM
> INTERFACE (GET, PUT, ...) that RESOURCE is being
> manipulated with, meaning that ANY number of ACTIONS
> are being initiated
> "behind the scenes" on server. Those ACTIONS
> will bring APPLICATION to NEW STATE and our client
> will receive the
> REPRESENTATION of some RESOURCE which describes that
> NEW (but now CURRENT) STATE of APPLICATION.
> And on we go again, based on new links in that new
> representation.
>
>
> Well, am I getting it right?
>
>
> Cheers,
> Miran
>
>
>
>
>
>
--- In rest-discuss@yahoogroups.com, "A. Pagaltzis" <pagaltzis@...> wrote:
>
> [Hmm, I thought I had sent this mail to the list. Apparently not.]
>
> * Pero Peric <zlac53@...> [2007-05-13 01:30]:
> > Client has loaded the home page and therefore client can
> > see the REPRESENTATION of the CURRENT STATE of APPLICATION.
>
> I'd call this the initial state.
>
> > Well, am I getting it right?
>
> It looks right. But you seem to be thinking of "state" as the
> current state of the server -- when actually the state is on
> the client (and only on the client).
>
> Regards,
> --
> Aristotle Pagaltzis // <http://plasmasturm.org/>
>
Hi, thanks for the info.
First I have to apologize for the mix-up with names - my name is not
Pero, my name is Miran. When I subscribed to the REST group, I simply
used my old, rarely used, anonymous backup mail account at yahoo.com -
hence the fake name. I created a profile to use with this group,
thinking (obviously wrongly) that the system would pick up my real name
from the profile, but it didn't. It should be OK from now on.
Well, back to the business :-)
Yes, it slipped from my mind that interactions are completely
stateless and that the client has to provide complete state with every
request sent to the server. So the original sentences
"By means of that URI and selected OPERATION derived from UNIFORM
INTERFACE (GET, PUT, ...) that RESOURCE is being manipulated with,
meaning that ANY number of ACTIONS are being initiated "behind the
scenes" on server. Those ACTIONS will bring APPLICATION to NEW STATE
and our client will receive the REPRESENTATION of some RESOURCE which
describes that NEW (but now CURRENT) STATE of APPLICATION."
would be more accurate when written like this:
"By means of that URI and selected OPERATION derived from UNIFORM
INTERFACE (GET, PUT, ...) that RESOURCE is being manipulated with,
meaning that ANY number of ACTIONS are being initiated "behind the
scenes" on server. These ACTIONS will result in "shifting
APPLICATION'S focus" (blah, there has to be a better term) from
originally targeted RESOURCE to another RESOURCE (by
creating/updating/retrieving that another RESOURCE ?), and our client
will receive the REPRESENTATION of that RESOURCE, thus being
TRANSFERRED (the client) to a NEW STATE."
As you can see, I'm a bit confused about what's actually happening on
the server and how that fits in REST's boundaries. That "shifting of
focus" is basically a lame statement trying to express that there is
some kind of transition of state happening on the server too - we do
have some application whose back-end processes are implemented in some
programming language. When a request (URI+operation) hits a resource,
some of those back-end processes are being triggered, resulting
effectively in "some kind of change of the STATE" - state of those
back-end processes (but obviously that's another kind of state, not
the state that REST is all about). When these processes are completed,
we have as the result another RESTful RESOURCE, whose
representation will be delivered to the client as the server's response to
the initial request.
Basically, when trying to think in the REST way, I have to forget
about all the back-end processes and activities that are triggered by
requests? Those are actually outside of REST's scope? Request issued
upon some resource will trigger some actions, those actions will
produce another resource, but those actions and all of the related
internals are beyond REST (i.e. it doesn't matter how it's done)?
Does that make sense :-) ?
Best regards,
Miran
Hi Chad, * Chad Burt <chad@...> [2007-05-14 20:20]: > So basically my post looking for a URI naming convention that > allows arbitrary arguments is orthogonal to this mailing list! Indeed. > This could be a common mistake as the rails community moves > towards RESTful practices. It *is* a common mistake. I’ve seen it a lot, and Rails is far from the only community making this mistake. * Berend de Boer <berend@...> [2007-05-14 21:50]: > I don't believe someone who considers URL to be just opaque > strings will have a good chance of developing a good REST > application. I don’t believe someone who considers URIs meaningful will have a good chance of developing a good REST application. Considering the URI meaningful may lead them to build an interface wherein the client has to *construct* URIs – which isn’t RESTful, however much it may be resource-centric. Of course, the URI is meaningful to the *server*, and the server may *parse* the URI in order to extract parameters from it – but that’s for the server to know and none of the client’s business. But the client *must* consider URIs opaque and just follow them. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote: > Of course, the URI is meaningful to the *server*, and the server > may *parse* the URI in order to extract parameters from it – but > that’s for the server to know and none of the client’s business. > > But the client *must* consider URIs opaque and just follow them. That leaves forms in a gray place. cheers Bill
> I don't believe someone who considers URIs meaningful will have a > good chance of developing a good REST application. > Considering the URI meaningful may lead them to build an > interface wherein the client has to *construct* URIs - which > isn't RESTful, however much it may be resource-centric. I believe you are mistaking cause and effect here. > Of course, the URI is meaningful to the *server*, and the > server may *parse* the URI in order to extract parameters > from it - but that's for the server to know and none of the > client's business. That's not true. It is the client's business if (but only if) the server publishes that information for the client's use. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
* Bill de hOra <bill@...> [2007-05-15 12:50]:
> A. Pagaltzis wrote:
>
> > Of course, the URI is meaningful to the *server*, and the server
> > may *parse* the URI in order to extract parameters from it – but
> > that’s for the server to know and none of the client’s business.
> >
> > But the client *must* consider URIs opaque and just follow them.
>
> That leaves forms in a gray place.
Only seemingly, though.[^1] This:
<form action="http://example.org/search" method="get">
<input type="text" name="q">
<input type="submit" value="Find">
</form>
is really just a complex link. The client does not need a priori
knowledge of the server URI space. It constructs (some parts of)
a URI, but:
It does so based on directions provided to it within a
representation returned from the server (and according to the
rules for query strings).
Therefore, this is still representational state transfer. This is
what Roy’s recent message boils down to:
* Roy T. Fielding <fielding@...> [2007-05-10 18:00]:
> It is essential to eliminate the coupling between client and
> server. If the application doesn't follow the workflow defined
> by the representations that are received, then the application
> isn't using the REST style. Not even a little bit. It is using
> RPC plus streaming, with a rather inefficient syntax, and the
> client will break each time the server's application evolves
> because the client must be anticipating the server's state
> based on its own assumptions. In other words, the two are
> coupled by their original design.
In a sense, forms are just a compression scheme. They can be used
to express an infinite number of links with a single description
that encompasses all of them. But representational state transfer
is still happening because such a manifold link must still be
provided by the server to the client inside a representation.
Likewise, having the client construct URIs in general would be
OK, if this was happening based on a URI Template enclosed in a
representation previously retrieved by the client. It is _not_ OK
if it happens based on a priori knowledge of the client about the
server URI space, because then the client is coupled to the
server.
[^1] But I agree that I didn’t sufficiently qualify my previous
post to cover forms.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
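To make the "complex link" reading concrete, here is a sketch of how a generic client can expand the search form above into a URI using nothing but the representation it received plus the standard query-string rules (the action URI and the field name come straight from the form; only the field value is supplied at run time):

```python
# Forms as "compressed links": expand a form into a concrete URI.
from urllib.parse import urlencode

form_action = "http://example.org/search"  # from the server's representation
user_input = {"q": "rest"}                 # field name from the form, value from the user

uri = form_action + "?" + urlencode(user_input)
print(uri)  # -> http://example.org/search?q=rest
```

No a priori knowledge of the server's URI space is involved: the client merely follows directions enclosed in a representation.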
* Mike Schinkel <mikeschinkel@...> [2007-05-15 13:30]: > > I don't believe someone considers URIs meaningful will have a > > good chance of developing a good REST application. > > Considering the URI meaningful may lead them to build an > > interface wherein the client has to *construct* URIs - which > > isn't RESTful, however much it may be resource-centric. > > I believe you are mistaking cause and effect here. Or maybe we are just in violent agreement. > > Of course, the URI is meaningful to the *server*, and the > > server may *parse* the URI in order to extract parameters > > from it - but that's for the server to know and none of the > > client't business. > > That's not true. It is the client's business if (but only if) > the server publishes that information for the client's use. Yes. I just went into this at length in reply to Bill’s objection. Sorry for the incomplete exposition. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Chad Burt wrote:
> So basically my post looking for a URI naming convention that allows
> arbitrary arguments is orthogonal to this mailing list!
It is and it isn't. It's just as RESTful to have <http://example.net/1>
<http://example.net/2> <http://example.net/3> and so on for every
resource in the application.
However, good URI design affects the "Coolness" of the URIs
<http://www.w3.org/Provider/Style/URI> which have
strong benefits including some that impact upon RESTfulness - in
particular while the same resource can be identified by more than one
URI (and it's often useful to do so) the opaqueness of URIs (and it's
not a matter that URIs "should" be opaque - URIs simply are opaque; some
applications may be interpreting them in certain ways, but the rest of
the web is unaware of this) means that using the same URI consistently
makes better use of caching, and better matching of URIs in other
contexts (in particular, if I have two different URIs I have to assume
they refer to different resources - though maybe they don't, if I have
two URIs that are both the same I know they refer to the same resource).
Good design in URIs also tends to indicate good design elsewhere. And
there are other advantages in good URIs; those advantages of coolness
that don't affect REST, search-engine advantages, user-hackability
(which is a matter of users making guesses based on assumptions that
are NOT RESTful, but who cares if we aren't actually breaking REST to
allow for that). As long as a particular URI design isn't at odds with
REST, then certainly do make them as good as you can.
Where a URI design is at odds with REST, and hence not "good design" for
our purposes, is where it does something that REST says a URI doesn't do.
If you have any information in a URI that is not identifying a resource
(e.g. session information, user-ids, etc) then you are breaking from
REST and losing its advantages.
If you require a client application to construct a URI then you are
breaking from REST unless the client receives information about how to
construct that URI from another resource's representation (e.g. HTML
forms with method="get" are instructions to a web browser as to how to
construct a URI based on user input, and hence perfectly compliant with
REST).
It's not wrong therefore to, for example, build a URI like
<http://example.net/grandparent/parent/child.resource> but it is wrong
for a client acting on a representation of child.resource to assume that
<http://example.net/grandparent/parent/> is the URI of a parent resource
- rather it must receive a link to that URI. Hardcoding <parent
xlink:href="../" /> into the code to build a representation of
child.resource might do that perfectly well - hardcoding on the server
and softcoding on the server are the same thing as far as the client can
see.
You generally need to have some sort of "seed" URI to tell a client
where to start - this cannot be prevented.
Beyond that to be perfectly RESTful all URIs must be obtained from
representations of other resources. Links (including relative links) are
the main mechanism, but instructions on how to construct a URI are also
valid - HTML forms is an example, but any other mechanism could be used,
e.g. we could have something like the following:
<constructionPattern pattern="http://example.net/{parent}/{child}/"/>
Not from any standard, possibly flawed (I just made it up there now) but
capable of informing the construction of a URI to anything that knows what
"parent" and "child" mean in this context, and completely not bound to
any given URI pattern - if we changed the way the server deals with this
sort of request we could change the above to be, e.g.:
<constructionPattern
pattern="http://example.net/childMatrix?parent={parent};child={child}"/>
<constructionPattern
pattern="http://example.net/weDontNeedTheParentAnyway/{child}"/>
or whatever.
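As a rough illustration (ignoring the percent-encoding rules a real URI
Templates implementation would need), expanding such a
constructionPattern takes only a few lines:

```python
# Naive expansion of a {name}-style pattern; real implementations
# must also handle percent-encoding of substituted values.
import re

def expand(template, values):
    return re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], template)

print(expand("http://example.net/{parent}/{child}/",
             {"parent": "grandparent", "child": "parent"}))
# -> http://example.net/grandparent/parent/
```

The client needs no hardcoded knowledge of the URI layout: if the
server changes the pattern it publishes, the same expansion code keeps
working.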
> I think I got confused because I was coming at REST from the perspective
> of how it is implemented in rails.
REST isn't implemented in rails. REST is USED in rails.
REST is also used in rails in a rails "hello world" webpage, and can be
used in rails outside of those things explicitly labelled as REST.
On 15 May 2007, at 13:57, Jon Hanna wrote: > > You generally need to have some sort of "seed" URI to tell a client > where to start - this cannot be prevented. This is an interesting point. I try to follow this principle in my application. On the seed URI you get a 'capabilities' document that basically has links to the main resource collections. The application in question is a workflow job submission system with a rich client (not a browser). Say you GET /: <capabilities> <users xlink:href="users" /> <queues xlink:href="queues" /> <workers xlink:href="workers" /> </capabilities> In my design for the client everything is focused around the user, similar to the personal shopping basket in several REST examples. So a GET /users/stain: <user> <username>stain</username> <workflows xlink:href="/users/stain/workflows" /> <jobs xlink:href="/users/stain/jobs" /> </user> From here the client knows where to go to post new jobs, etc. My problem is how to get that '/users/stain' address from the seed address. The username is the same as in the HTTP basic authentication (which is required for basically everything), but I still feel it's not really RESTful for the client to construct the address from the capabilities->users element at / and the HTTP basic-auth username, or to include a <currentUser xlink:href="users/stain" /> element in the capabilities document. Or is it? I could use /users/stain as the seed URI instead of /. Registration (POST to /users) sends a Location header in the response, so a clever client could easily store this URI, but this still makes it difficult for new clients when users have already registered. Users would type in the root URI of the service, their username and password. (If they are told in documentation that "their" URI is /users/YOURUSERNAME this is basically the same as letting the client construct this URI) Should I have a POST /users;current or some similar RPC-ish hack?
-- Stian Soiland, myGrid team School of Computer Science The University of Manchester http://www.cs.man.ac.uk/~ssoiland/
Stian Soiland wrote: > From here the client knows where to go to post new jobs, etc. My > problem is how to get that '/users/stain' address from the seed > address. The username is the same as in the HTTP basic authentication > (which is required for basically everything), but I still feel it's > not really RESTful for the client to construct the address from the > capabilities->users element at / and the HTTP basic-auth username, or > to include a <currentUser xlink:href="users/stain" /> element in the > capabilities document. Or is it? It's RESTful to return a different representation based on the content of headers, as long as you say that you're doing so (through the Vary header), so putting <currentUser xlink:href="users/stain" /> doesn't break with REST as long as you include Authorization in the Vary header. There are advantages in not doing so, particularly with regard to cachability (since most caches will regard anything with a Vary header as non-cachable, and even a "perfect" cache will of course only be able to send you the cached version if it is in response to the same Authorization header). Therefore to minimise the impact of this I'd recommend always sending <currentUser xlink:href="currentUser" /> and having that redirect to the current user on the basis of the Authorization header - with appropriate cache and Vary headers. Then only that redirect response suffers from the caching problems.
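Jon's suggestion can be sketched as follows. The handler name and return shape here are hypothetical; the essentials are the redirect to the per-user URI derived from the Basic-auth username, and the Vary: Authorization header telling caches the redirect depends on who is asking:

```python
# Hypothetical handler for a stable "current user" URI that redirects
# to /users/<name> based on the Basic-auth username.
import base64

def handle_current_user(authorization_header):
    scheme, _, creds = authorization_header.partition(" ")
    assert scheme == "Basic"
    username = base64.b64decode(creds).decode().split(":", 1)[0]
    return (
        302,  # temporary redirect to the per-user resource
        {"Location": "/users/%s" % username, "Vary": "Authorization"},
    )

auth = "Basic " + base64.b64encode(b"stain:secret").decode()
status, headers = handle_current_user(auth)
print(status, headers["Location"])  # -> 302 /users/stain
```

Only this one response pays the Vary-related caching cost; the per-user representations it redirects to remain cacheable as usual.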
* Jon Hanna <jon@...> [2007-05-15 15:05]:
> e.g. we could have something like the following:
>
> <constructionPattern pattern="http://example.net/{parent}/{child}/"/>
>
> Not from any standard, possibly flawed (I just made it up there
> now)
But it’s exactly the syntax of URI Templates:
http://bitworking.org/projects/URI-Templates/draft-gregorio-uritemplate-00.html
Hopefully it will actually make it to RFC. I wonder what its
current status is…
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote:
> * Jon Hanna <jon@...> [2007-05-15 15:05]:
>> e.g. we could have something like the following:
>>
>> <constructionPattern pattern="http://example.net/{parent}/{child}/"/>
>>
>> Not from any standard, possibly flawed (I just made it up there
>> now)
>
> But it’s exactly the syntax of URI Templates:
>
> http://bitworking.org/projects/URI-Templates/draft-gregorio-uritemplate-00.html
Okay, in that case I'll change my comment to "not from any standard,
though it does happen to match a spec, and therefore hopefully not at
all flawed". :)
I still haven't given any thought to whether the above has any serious
issues or not, though, so it's a very different thing for Joe to suggest
it and then spend time working out the fine details, examining it for
edge cases, security concerns, and so on, than for me to write a
pseudo-format as an example in two seconds.
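For the curious, the simple {name} expansion from the URI Templates draft can be sketched in a few lines. This is a minimal sketch of plain variable substitution only, using the pattern from the thread; the draft itself defines more behaviour:

```python
import re

def expand(template, variables):
    """Expand a simple URI Template: replace each {name} with the
    corresponding value from the variables dict."""
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError("no value supplied for {%s}" % name)
        return variables[name]
    return re.sub(r"\{([^}]+)\}", substitute, template)

# The constructionPattern from Jon's example:
print(expand("http://example.net/{parent}/{child}/",
             {"parent": "users", "child": "stain"}))
# http://example.net/users/stain/
```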
* Stian Soiland <ssoiland@...> [2007-05-15 17:00]:
> The username is the same as in the HTTP basic authentication
> (which is required for basically everything), but I still feel
> it's not really RESTful to on the client construct the address
> from the capabilities->users element at / and the HTTP
> basic-auth username, or to include a <currentUser
> xlink:href="users/stain" /> element in the capabilities
> document. Or is it?
>
> I could use /users/stain as the seed URI instead of /.
> Registration (POST to /users) sends a Location in the return,
> so a clever client could easily store this URI, but this still
> makes it difficult for new clients when users have already
> registered. Users would type in the root URI of the service,
> their username and password.
And at that point, I’d issue a temporary redirect to /users/stain.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Yet again, we have the "URIs must be opaque for REST". URIs can be non-opaque and client-constructed under a wide variety of conditions, typically when the server authorizes such treatment. There's a TAG finding on metadata in URIs that's quite related. URI naming conventions that allow arbitrary arguments are quite within the scope of this list.
Cheers,
Dave
_____
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of A. Pagaltzis
Sent: Tuesday, May 15, 2007 1:48 AM
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: REST and URI naming conventions
Hi Chad,
* Chad Burt <chad@underbluewaters.net> [2007-05-14 20:20]:
> So basically my post looking for a URI naming convention that
> allows arbitrary arguments is orthogonal to this mailing list!
Indeed.
> This could be a common mistake as the rails community moves
> towards RESTful practices.
It *is* a common mistake. I've seen it a lot, and Rails is far from the only community making this mistake.
* Berend de Boer <berend@pobox.com> [2007-05-14 21:50]:
> I don't believe someone who considers URLs to be just opaque
> strings will have a good chance of developing a good REST
> application.
I don't believe someone who considers URIs meaningful will have a good chance of developing a good REST application. Considering the URI meaningful may lead them to build an interface wherein the client has to *construct* URIs - which isn't RESTful, however much it may be resource-centric. Of course, the URI is meaningful to the *server*, and the server may *parse* the URI in order to extract parameters from it - but that's for the server to know and none of the client's business. The client *must* consider URIs opaque and just follow them.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote:
> * Bill de hOra <bill@...> [2007-05-15]
> > That leaves forms in a gray place.
>
> Only seemingly, though.[^1] This:
>
> <form action="http://example.org/search" method="get">
> <input type="text" name="q">
> <input type="submit" value="Find">
> </form>
>
> is really just a complex link. The client does not need a priori
> knowledge of the server URI space. It constructs (some parts of)
> a URI, but:
A computation is needed to create the link. "Complex" is a word game I don't buy into here; on that basis EPRs are complex too.
> It does so based on directions provided to it within a
> representation returned from the server (and according to the
> rules for query strings).
Right, I need a preprocessor. I'm not getting how a form is only seemingly opaque. If you're saying it depends on the media type, then that's fine, but stronger claims about URI opacity need to be made contingent on that. We haven't even talked about URI templates yet.
> Therefore, this is still representational state transfer. This is
> what Roy’s recent message boils down to:
>
> * Roy T. Fielding <fielding@...> [2007-05-10 18:00]:
> > It is essential to eliminate the coupling between client and
> > server. If the application doesn't follow the workflow defined
> > by the representations that are received, then the application
> > isn't using the REST style. Not even a little bit. It is using
> > RPC plus streaming, with a rather inefficient syntax, and the
> > client will break each time the server's application evolves
> > because the client must be anticipating the server's state
> > based on its own assumptions. In other words, the two are
> > coupled by their original design.
>
> In a sense, forms are just a compression scheme. They can be used
> to express an infinite amount of links with a single description
> that encompasses all of them. But representational state transfer
> is still happening because such a manifold link must still be
> provided by the server to the client inside a representation.
Sorry, but I suspect this ends up with an infinite tape.
cheers
Bill
Interesting new book -- I haven't seen any other REST how-tos like this for other languages. -enp
http://www.rubyinside.com/rails-refactoring-by-trotter-cashion-494.html
Rails Refactoring to Resources (Digital Short Cut): Using CRUD and REST in Your Rails Application
Rails Refactoring is an e-book written by Trotter Cashion (of MotionBox) and published by Addison-Wesley. Targeting developers who are tentatively dipping a toe into the world of REST, Rails Refactoring looks at how to turn your old-fashioned unRESTian Rails code into the modern REST-capable equivalent.
http://www.awprofessional.com/bookstore/product.asp?isbn=0321501748&rl=1&rl=1
"RESTful Web Services" By Leonard Richardson, Sam Ruby http://www.oreilly.com/catalog/9780596529260/ :-) Alan Dean http://thoughtpad.net/alan-dean
Mark Baker wrote:
> The point is that you can't interpret the initial PUT request as "set
> state=off and temp=0". It just means "state=off", and the server is
> free to do what it wants with "temp". At least, that's how I
> understand it.
We've been through this in APP land recently without coming to consensus. My continued belief (not shared by all) is that the specified PUT request means state="off" and there is no temp value. If the server doesn't like that, it should generate an HTTP 40x error and refuse the request rather than trying to error correct.
--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 5/15/07, Elliotte Harold <elharo@...> wrote:
> We've been through this in APP land recently without coming to
> consensus. My continued belief (not shared by all) is that the specified
> PUT request means state="off" and there is no temp value.
I was thinking more about this, and it seems like an error, or at least sloppy, to intentionally spec PUT this way, because the meaning of the client's message is not clear. The MS stuff is really a patch format according to the docs. If the document
<foo>
<a>1</a>
<b>2</b>
<c>3</c>
</foo>
is retrieved from http://example.com/bar, it looks like you can update /a and /c in two ways.
PUT /bar
...
<foo>
<a>10</a>
<c>30</c>
</foo>
or
PUT /bar/a
...
10
PUT /bar/c
...
30
There's a problem here, in that it becomes impossible to clearly request that an HTTP server save your patch somewhere, in case you don't want to apply it immediately. To ensure that you don't get bogus changes inserted in your patch, you need to wrap it, like
<patch>
<foo>
<a>10</a>
<c>30</c>
</foo>
</patch>
and then the question becomes... why isn't the patch format like this in the first place, and sent using either PATCH or POST?
--
Robert Sayre
"I would have written a shorter letter, but I did not have the time."
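Elliotte's "refuse the request rather than error correct" position upthread can be sketched as follows. This is a minimal in-memory sketch; the resource URI, field names, and store are made up for illustration:

```python
# In-memory resource store; each resource is a complete set of fields.
REQUIRED_FIELDS = {"state", "temp"}
store = {"/thermostat": {"state": "on", "temp": 20}}

def handle_put(uri, body):
    """PUT replaces the whole resource. A body missing fields is not
    'keep the old values' - it is an incomplete representation, so we
    refuse it with a 400 instead of guessing (error-correcting)."""
    missing = REQUIRED_FIELDS - body.keys()
    if missing:
        return 400, "incomplete representation, missing: %s" % sorted(missing)
    store[uri] = dict(body)  # wholesale replacement
    return 200, "OK"
```

Under this reading, `handle_put("/thermostat", {"state": "off"})` is rejected outright, and the partial-update use case belongs to a separate patch format sent with PATCH or POST, as Robert suggests.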
> * Jon Hanna <jon@...> [2007-05-15 15:05]:
> > e.g. we could have something like the following:
> >
> > <constructionPattern
> pattern="http://example.net/{parent}/{child}/"/>
> >
> > Not from any standard, possibly flawed (I just made it up there
> > now)
>
> But it's exactly the syntax of URI Templates:
>
> http://bitworking.org/projects/URI-Templates/draft-gregorio-uritemplate-00.html
>
> Hopefully it will actually make it to RFC. I wonder what its
> current status is.
It seems to have stalled. And that's a huge shame, because having it be
a recognized standard would be hugely valuable.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
"It never ceases to amaze how many people will proactively debate away
attempts to improve the web..."
Mike Schinkel wrote:
> > Hopefully it will actually make it to RFC. I wonder what its
> > current status is.
>
> It seems to have stalled. And that's a huge shame, because having it
> be a recognized standard would be hugely valuable.
In particular as it is being used both by Microsoft and Sun in their next-gen frameworks...
Best regards, Julian
On 15 May 2007, at 16:21, A. Pagaltzis wrote:
>> I could use /users/stain as the seed URI instead of /.
>> Registration (POST to /users) sends a Location in the return,
>> so a clever client could easily store this URI, but this still
>> makes it difficult for new clients when users have already
>> registered. Users would type in the root URI of the service,
>> their username and password.
>
> And at that point, I'd issue a temporary redirect to
> /users/stain.
As a response to which request? To GET /? (Which I don't want to require authentication for.) I would then have to have my capabilities document linked from the user document - although possible to achieve 'single point of entry', I can't see what the capabilities has to do with the user resource..
--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
Andrzej Jan Taramina wrote: > I would pay good coin to have a book by Roy Fielding in my library +1 /Roger
--- In rest-discuss@yahoogroups.com, "Miran Z." <zlac53@...> wrote:
> Yes, it slipped from my mind that interactions are completely
> stateless and that the client has to provide complete state with every
> request sent to the server.
<cut>
> "By means of that URI and selected OPERATION derived from UNIFORM
> INTERFACE (GET, PUT, ...) that RESOURCE is being manipulated with,
> meaning that ANY number of ACTIONS are being initiated "behind the
> scenes" on server. These ACTIONS will result in "shifting
> APPLICATION'S focus" (blah, there has to be a better term) from
> originally targeted RESOURCE to another RESOURCE (by
> creating/updating/retrieving that another RESOURCE ?), and our client
> will receive the REPRESENTATION of that RESOURCE, thus being
> TRANSFERED (the client) to a NEW STATE."
>
> As you can see, I'm a bit confused about what's actually happening on
> the server and how that fits in REST's boundaries. That "shifting of
> focus" is basically a lame statement trying to express that there is
> some kind of transition of state happening on the server too - we do
> have some application whose back-end processes are implemented in some
> programming language. When a request (URI+operation) hits a resource,
> some of those back-end processes are being triggered, resulting
> effectively in "some kind of change of the STATE" - state of those
> back-end processes (but obviously that's another kind of state, not
> the state that REST is all about). When these processes are completed,
> we have as the result that another RESTful RESOURCE, whose
> representation will be delivered to the client as server's response to
> the initial request.
>
Well, it looks like there exists that other "kind of state change" - the
much anticipated book "RESTful Web Services" (preview chapter,
http://www.oreilly.com/catalog/9780596529260/chapter/ch04.pdf, page
15) sheds some light - it makes a clear distinction between "application
state" (living on the client) and "resource state" (living on the server
and being identical for all clients). And it seems that both are equally
important to REST.
Any thoughts/comments/suggestions?
Regards,
Miran
* Stian Soiland <ssoiland@...> [2007-05-16 12:55]:
> On 15 May 2007, at 16:21, A. Pagaltzis wrote:
>>> I could use /users/stain as the seed URI instead of /.
>>> Registration (POST to /users) sends a Location in the return,
>>> so a clever client could easily store this URI, but this still
>>> makes it difficult for new clients when users have already
>>> registered. Users would type in the root URI of the service,
>>> their username and password.
>>
>> And at that point, I’d issue a temporary redirect to
>> /users/stain.
>
> As a response to which request? To GET / ? (Which I don't want
> to require authentication)
As a response to whichever URI in the sequence is the first one to require authentication. Instead of returning a page in response to a successfully authenticated request for that URI, send a redirect. This should indeed not be in response to GET /, since that would make / vary with authentication and therefore make it uncachable.
> I would then have to have my capabilities document linked from
> the user document - although possible to achieve 'single point
> of entry' I can't see what the capabilities has to do with the
> user resource..
I’m not following. Does /user/stain have to be something other than the capabilities document?
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Miran Z. <zlac53@...> [2007-05-16 13:50]:
> Well, looks like there exists that other "kind of state change"
> - the much anticipated book "RESTful Web Services" (preview
> chapter,
> http://www.oreilly.com/catalog/9780596529260/chapter/ch04.pdf,
> page 15) shed some light - it makes clear distinction between
> "application state" (living on client) and "resource state"
> (living on server and being identical for all clients). And it
> seems that both are equally important to REST.
>
> Any thoughts/comments/suggestions?
I am still trying to get a grip on my thinking, but the point is that state means a different thing for the server than for the client. The client may keep implicit state that persists across requests, but the server may not. State on the server is always explicit, exposed via resources.
That’s not a satisfactory explanation, but my understanding is not yet coherent enough to express it much better than that.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote:
> I am still trying to get a grip on my thinking, but the point is
> that state means a different thing for the server than for the
> client. The client may keep implicit state that persists across
> requests, but the server may not. State on the server is always
> explicit, exposed via resources.
Both the server and the client will have state. Connections do not have state. The server will not make any assumptions about the client's state based on previous operations. The client will not make any assumptions about the server's state based on previous operations (though checking that a previous known state is still valid may only require a cache lookup - conceptually the client has still checked with the server).
On 16 May 2007, at 12:54, A. Pagaltzis wrote:
> As a response to whichever URI in the sequence is the first one
> to require authentication. Instead of returning a page in
> response to a successfully authenticated request for that URI,
> send a redirect.
So this would be the 'current user' URI then.
> I'm not following. Does /user/stain have to be something other
> than the capabilities document?
Maybe my description was not self-describing.. My idea was that at
/users/stain you would get the user document which describes the user
and links to the user's different personal collections:
GET /users/stain (authenticated as stain)
<user>
<username>stain</username>
<workflows xlink:href="/users/stain/workflows" />
<jobs xlink:href="/users/stain/jobs" />
</user>
While the capabilities document (maybe not the best name) would
describe the overall site structures. I want to serve this at / so
that the client can see where to POST to register (/users), where
the list of queues are (/queues) etc. Basically everything except
POSTing a new user requires authentication.
I am therefore now thinking that I'll go for Jon's suggestion with a
slight variation on the URI for the current user:
GET / (without auth)
<capabilities>
<users xlink:href="/users" />
<currentUser xlink:href="/users;current" />
<queues xlink:href="/queues" />
</capabilities>
The current user link could also be shown in the collection document
at /users - might be cleaner, although it should also be an OK link
from 'capabilities', as I don't want to show the list of users to
unauthorised users - possibly not to normal users either.
I'm open to suggestions for a better name than 'capabilities' :-)
GET /users;current (without auth)
401 Auth required
Vary: Authorization (Should it be included here?)
GET /users;current (with auth "stain")
302 (Found) moved Temporarily
Vary: Authorization
Location: /users/stain
(For the curious, here's the registration)
POST /users
<user>
<username>stain</username>
<password>fish</password>
</user>
201 Created
Location: /users/stain
(or 409 Conflict if the username was already taken.. I was thinking
that this could also be done through PUT /users/stain - but then how
should I specify that URI template?)
--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
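A client working against Stian's design never builds /users/stain itself; it parses the seed (capabilities) document and follows what it finds. A minimal sketch with xml.etree, using element names from the example above (with the xlink namespace declared so a standard XML parser accepts it; the function name is made up):

```python
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"

def link_from_capabilities(doc, element_name):
    """Pull the xlink:href for a named element out of the seed
    document, so the client never constructs the URI itself."""
    root = ET.fromstring(doc)
    el = root.find(element_name)
    if el is None:
        return None
    return el.get("{%s}href" % XLINK)

capabilities = """
<capabilities xmlns:xlink="http://www.w3.org/1999/xlink">
  <users xlink:href="/users" />
  <currentUser xlink:href="/users;current" />
  <queues xlink:href="/queues" />
</capabilities>
"""

print(link_from_capabilities(capabilities, "currentUser"))
# /users;current -- GET this with credentials and follow the 302
```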
i have a question of my own.
i read the sample chapter. here is a quote:
> A web service only needs to care about your application state when
> you’re actually making a request. The rest of the time, it doesn’t
> even know you exist. This means that whenever a client makes a
> request, it must include all the application states the server will
> need to process it. The server might send back a page with links,
> telling the client about other requests it might want to make in the
> future, but then it can forget all about the client until the next
> request. That’s what I mean when I say a web service should be
> “stateless.” The client should be in charge of managing its own path
> through the application.
so let's take the case of the shopping cart. according to the above, there shouldn't be any shopping cart resource. as a client, i just browse a web site for items i want, then post them all, in one go, to http://example.com/checkout. am i right?
thanks,
ittay
Miran Z. wrote on 05/16/07 14:44:
--- In rest-discuss@yahoogroups.com, "Miran Z." <zlac53@...> wrote:
> Yes, it slipped from my mind that interactions are completely
> stateless and that the client has to provide complete state with every
> request sent to the server.
> "By means of that URI and selected OPERATION derived from UNIFORM
> INTERFACE (GET, PUT, ...) that RESOURCE is being manipulated with,
> meaning that ANY number of ACTIONS are being initiated "behind the
> scenes" on server. These ACTIONS will result in "shifting
> APPLICATION'S focus" (blah, there has to be a better term) from
> originally targeted RESOURCE to another RESOURCE (by
> creating/updating/retrieving that another RESOURCE ?), and our client
> will receive the REPRESENTATION of that RESOURCE, thus being
> TRANSFERED (the client) to a NEW STATE."
>
> As you can see, I'm a bit confused about what's actually happening on
> the server and how that fits in REST's boundaries. That "shifting of
> focus" is basically a lame statement trying to express that there is
> some kind of transition of state happening on the server too - we do
> have some application whose back-end processes are implemented in some
> programming language. When a request (URI+operation) hits a resource,
> some of those back-end processes are being triggered, resulting
> effectively in "some kind of change of the STATE" - state of those
> back-end processes (but obviously that's another kind of state, not
> the state that REST is all about). When these processes are completed,
> we have as the result that another RESTful RESOURCE, whose
> representation will be delivered to the client as server's response to
> the initial request.
>
Well, looks like there exists that other "kind of state change" - the
much anticipated book "RESTful Web Services" (preview chapter,
http://www.oreilly.com/catalog/9780596529260/chapter/ch04.pdf, page
15) shed some light - it makes clear distinction between "application
state" (living on client) and "resource state" (living on server and
being identical for all clients). And it seems that both are equally
important to REST.
Any thoughts/comments/suggestions?
Regards,
Miran
Stian Soiland wrote:
> GET /users;current (without auth)
> 401 Auth required
> Vary: Authorization (Should it be included here?)
No need to include the Vary here, since 401 responses can never be cached.
> GET /users;current (with auth "stain")
> 302 (Found) moved Temporarily
> Vary: Authorization
> Location: /users/stain
Contrariwise, I'd probably take a belt-and-braces approach here, because while a 302 can only be cached if there are explicit headers allowing for this, the fact that it can be cached in some cases increases the risk of it being cached in cases where it's inappropriate, so I'd add headers explicitly denying caching. Not a matter of either the idea or the HTTP spec, but one of general paranoia.
Ittay Dror wrote:
> so let's take the case of the shopping cart. according to the above,
> there shouldn't be any shopping cart resource. as a client, i just
> browse a web site for items i want, then post them all, in one go, to
> http://example.com/checkout. am i right?
A server could have a bunch of shopping carts. It could know that certain authentication parameters (whether an RFC 2617 username and username/realm/password hash, or whatever) are needed to access the cart. It could know a lot of things about how that shopping cart relates to other resources. None of these things require it to know anything about any state held in any web browser or other client.
* Bill de hOra <bill@...> [2007-05-15 20:30]:
> A. Pagaltzis wrote:
> > * Bill de hOra <bill@...> [2007-05-15]
> > > That leaves forms in a gray place.
> >
> > Only seemingly, though.[^1] This:
> >
> > <form action="http://example.org/search" method="get">
> > <input type="text" name="q">
> > <input type="submit" value="Find">
> > </form>
> >
> > is really just a complex link. The client does not need a
> > priori knowledge of the server URI space. It constructs (some
> > parts of) a URI, but:
>
> A computation is needed to create the link. "Complex" is a word
> game I don't buy into here; on that basis EPRs are complex too.
Yeah, that wording was mistaken.
> > It does so based on directions provided to it within a
> > representation returned from the server (and according to the
> > rules for query strings).
>
> Right, I need a preprocessor. I'm not getting how a form is
> only seemingly opaque.
I was saying it is only seemingly transparent, not seemingly opaque. But what I should have been saying is that it is partially transparent. But again, the key point is that the transparency is granted by a description provided in a representation retrieved from the server. My key argument in that mail (and if I’m not mistaken you didn’t address this at all) is that the client does not have a priori knowledge of the server URI space. It knows how to construct such URIs only because the server publishes information about what it will accept in an agreed-upon format.
> If you're saying it depends on the media type, then that's
> fine, but stronger claims about URI opacity need to be made
> contingent on that. We haven't even talked about URI templates
> yet.
They’re not qualitatively different from forms.
> > Therefore, this is still representational state transfer.
> > This is what Roy’s recent message boils down to:
> >
> > * Roy T. Fielding <fielding@...> [2007-05-10 18:00]:
> > > It is essential to eliminate the coupling between client
> > > and server. If the application doesn't follow the workflow
> > > defined by the representations that are received, then the
> > > application isn't using the REST style. Not even a little
> > > bit. It is using RPC plus streaming, with a rather
> > > inefficient syntax, and the client will break each time the
> > > server's application evolves because the client must be
> > > anticipating the server's state based on its own
> > > assumptions. In other words, the two are coupled by their
> > > original design.
> >
> > In a sense, forms are just a compression scheme. They can be
> > used to express an infinite amount of links with a single
> > description that encompasses all of them. But
> > representational state transfer is still happening because
> > such a manifold link must still be provided by the server to
> > the client inside a representation.
>
> Sorry, but I suspect this ends up with an infinite tape.
Come again? In this paragraph, I wasn’t saying anything different from the key argument above. I admit that it’s probably hard to follow that this is just another version of the same concept if you’re not me in that moment of time. Sorry.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Bill de hOra wrote:
> A. Pagaltzis wrote:
>> * Bill de hOra <bill@...> [2007-05-15]
>> > That leaves forms in a gray place.
>>
>> Only seemingly, though.[^1] This:
>>
>> <form action="http://example.org/search" method="get">
>> <input type="text" name="q">
>> <input type="submit" value="Find">
>> </form>
>>
>> is really just a complex link. The client does not need a priori
>> knowledge of the server URI space. It constructs (some parts of)
>> a URI, but:
>
> A computation is needed to create the link. "Complex" is a word game I
> don't buy into here; on that basis EPRs are complex too.
>
>> It does so based on directions provided to it within a
>> representation returned from the server (and according to the
>> rules for query strings).
>
> Right, I need a preprocessor. I'm not getting how a form is only
> seemingly opaque. If you're saying it depends on the media type, then
> that's fine, but stronger claims about URI opacity need to be made
> contingent on that. We haven't even talked about URI templates yet.
I don't think complexity is the issue; decoupling is.
<a href="http://example.net/goHere?some=thing">This is a link</a>
needs processing too, and depends on the media type also.
<form action="http://example.net/goHere" method="get">
<p><input type="hidden" name="some" value="thing" />
<input type="submit" value="This is a form" /></p>
</form>
may need more complicated processing, but it is still much the same step from representation to HTTP request, and with exactly the same results. It is still decoupled - changing the representation FROM the server will change the request made TO the server. The state of the client is dependent upon the server; the state of the server is independent. It's not in any grey place, it's REST.
<form action="http://example.net/goHere" method="get">
<p><input type="text" name="some" />
<input type="submit" value="This is a form" /></p>
</form>
now allows for the user to affect the URI, but in a way still determined by the representation from the server. Likewise:
<script type="text/javascript">
document.location='http://example.net/goHere?some=thing';
</script>
Even:
<someUriBuildingMarkup>
<processPutsThingHere name="some"/>
<baseUri xlink:href="http://example.net/goHere"/>
</someUriBuildingMarkup>
No matter how much prior knowledge is needed to work out what is done with the above to build the URI, it remains the representation that is controlling how it is built up. The markup is therefore hypertext (or hypermedia at least), hypertext is therefore the engine of application state, and therefore we have that REST constraint met.
"youKnowWhatToDo" isn't hypermedia, but
"youKnowWhatToDo|http://example/goHere|some" could be.
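Jon's point that a form is still just a step from representation to request amounts to this: the client applies one fixed, generic rule (the query-string encoding) to data the server described, bringing no knowledge of the server's URI space. A minimal sketch, using the field names from the example above:

```python
from urllib.parse import urlencode

def submit_get_form(action, fields):
    """Turn a GET form submission into the URI to dereference, per the
    standard application/x-www-form-urlencoded rules. The form's action
    and field names come from the server's representation; only the
    encoding rule lives in the client."""
    return action + "?" + urlencode(fields)

# The hidden-field form from Jon's example:
print(submit_get_form("http://example.net/goHere", {"some": "thing"}))
# http://example.net/goHere?some=thing
```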
Ittay Dror <ittayd@...> writes:
> according to the above, there shouldn't be any shopping cart
> resource.
The quoted text does not preclude the existence of a personalised shopping cart resource that exists across requests, whose state is kept by the server.
> as a client, i just browse a web site for items i want,
> then post them all, in one go, to http://example.com/checkout.
You would also want to reserve what you have picked out immediately. Each 'Buy Me' link could be a POST to a shopping cart resource, say, http://example.com/users/alice/shoppingcart, to incrementally accumulate purchases by changing the shopping cart resource's server-side state each time. So, when checking out, you could get the current cart state from the server to present to the user. Then the cart state could be sent back to the server so there is no confusion as to which cart state to check out: the 'current' one, which may have changed outside of the user's knowledge, or the one that the user saw. If you expect the cart to hold numerous items such that the bandwidth cost of shipping the item list back and forth is prohibitive, you could expose the shopping cart as a versioned resource. Then you can check out a specific version of the cart.
> am i right?
What should be stateless is the interaction. I like to think of it as context-less interaction where everything has to be explicitly stated. A delete request must state exactly what to delete rather than having the server assume the last item looked at.
YS.
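The cart-as-resource flow YS describes can be sketched in a few lines. This is a minimal in-memory sketch; the versioning scheme is made up for illustration, and the cart URI is the one from the message above:

```python
# Server-side resource state: each POST to the cart appends an item and
# bumps the cart's version, so a checkout can name exactly the cart
# state the user saw.
carts = {}  # uri -> {"version": int, "items": [...]}

def post_to_cart(cart_uri, item):
    """POST an item to the cart resource; returns the new version."""
    cart = carts.setdefault(cart_uri, {"version": 0, "items": []})
    cart["items"].append(item)
    cart["version"] += 1
    return cart["version"]

def checkout(cart_uri, seen_version):
    """Check out only if the cart hasn't changed since the client saw
    it - the same optimistic check an ETag/If-Match pair would give."""
    cart = carts[cart_uri]
    if cart["version"] != seen_version:
        return 409, "cart changed; please review"  # Conflict
    return 200, list(cart["items"])
```

The interaction stays stateless in YS's sense: every request names the cart and, at checkout, the exact version it means, so the server never has to remember what the client last looked at.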
Thanks for the pointer. There is the HTTP vocabulary from the W3C:
http://www.w3.org/TR/HTTP-in-RDF/
but that is probably too low level. Kind of cool though. And there is the Web Arch vocabulary from Nokia:
http://www.schemaweb.info/schema/SchemaDetails.aspx?id=82
There may be others.
Henry
On 13 May 2007, at 06:01, Laurian Gridinoc wrote:
> On 12/05/07, Henry Story <henry.story@...> wrote:
>> [...]
>> [] :representationOf <StartPage>;
>> :ownedBy <http://bblfish.net/people/henry/card#me>;
>> ....
>
> representationOf reminds me of "Resource and Representation
> Relationship Vocabulary" from
> http://www.hackcraft.net/rep/rep.xml
>
> Did anyone else develop a vocabulary to address RESTful things?
>
> Cheers,
> --
> Laurian Gridinoc, purl.org/net/laur
On 16 May 2007, at 14:56, Mike Dierken wrote:
> Which application are you suggesting is badly architected?
> RDF, because it can't describe something simple like a common start
> page?
RDF can describe the common start page. It's just that in order to do so you need a blank node, i.e. an unnamed resource, to describe it. The need for blank nodes suggests a badly architected web application (if you are looking at it from the REST perspective, of course). Whenever you have one resource that produces widely varying representations for each user, it is an indication that there are a number of resources hiding behind that one resource. This is known to be the major criticism of SOAP, btw.
Henry
> As an example, try describing a web resource using RDF. If you find
> it difficult to do, it's probably that the application is badly
> architected. So for example on dev.java.net all users have the same
> start page url
>
> https://www.dev.java.net/servlets/StartPage
>
> so what is that page really naming? How is one going to describe it?
> One needs to describe it using some blank node such as
>
> [] :representationOf <StartPage>;
> :ownedBy <http://bblfish.net/people/henry/card#me>;
> ....
>
> ie, there is no way to refer to the resource uniformly. The same is
> true of xmlrpc or soap messages. Of course the correct way to set
> this up would be to give every person their own start page
>
> @prefix sioc: <http://rdfs.org/sioc/ns#> .
>
> <https://www.dev.java.net/people/bblfish> a sioc:User ;
> sioc:email <mailto:henry.story@...> .
[ Attachment content not displayed ]
Jon Hanna wrote:
> <form action="http://example.net/goHere" method="get">
> <p><input type="hidden" name="some" value="thing" />
> <input type="submit" value="This is a form" /></p>
> </form>
>
> May need more complicated processing, but it is still much the same step
> from representation to HTTP request, and with exactly the same results.
>
> It is still decoupled - changing the representation FROM the server will
> change the request made TO the server. The state of the client is
> dependent upon the server, the state of the server is independent. It's
> not in any grey place, it's REST.
>
> [...]
>
> No matter how much prior knowledge is needed to work out what is done
> with the above to build the URI, it remains the representation that is
> controlling how it is built up. The markup is therefore hypertext (or
> hypermedia at least), hypertext is therefore the engine of application
> state, therefore we have that REST constraint met.
>
> "youKnowWhatToDo" isn't hypermedia but
> "youKnowWhatToDo|http://example/goHere|some" could be.

I'm not arguing against prior knowledge (see recent posts re self-description). I'm suggesting that a sorites paradox which denies a difference between URIs and EPRs* with forms somewhere in the middle of the heap, on the one hand, combined with the notion that URI opacity is non-contingent to the web architecture on the other, doesn't make sense. And we haven't talked about library lookup yet.

cheers
Bill

* That EPR also stands for a paradox is quite cool.
Henry Story wrote:
> As an example, try describing a web resource using RDF. If you find
> it difficult to do, it's probably that the application is badly
> architected. So for example on dev.java.net all users have the same
> start page url
>
> https://www.dev.java.net/servlets/StartPage
>
> so what is that page really naming? How is one going to describe it?
> One needs to describe it using some blank node such as
>
> [] :representationOf <StartPage>;
> :ownedBy <http://bblfish.net/people/henry/card#me>;
> ....
>
> ie, there is no way to refer to the resource uniformly. The same is
> true of xmlrpc or soap messages. Of course the correct way to set
> this up would be to give every person their own start page
>
> @prefix sioc: <http://rdfs.org/sioc/ns# >
>
> <https://www.dev.java.net/people/bblfish> a sioc:User ;
> sioc:email <mailto:henry.story@... > .
>
> By this simple exercise I believe you can quickly spot and explain
> the problem with badly architected web applications.

I want to believe you've found an axiomatic means to describe controller URLs, because I think controller URLs are broken, and transitively, any framework that encourages them, is broken. However I suspect not being able to apply some presupposed predicate (via RDF) does not a bad URL make; the problem here could just as well be proper naming.

cheers
Bill
On 5/16/07, Henry Story <henry.story@...> wrote: > > On 16 May 2007, at 14:56, Mike Dierken wrote: > > > Which application are you suggesting is badly architected? > > RDF, because it can't describe something simple like a common start > > page? > > RDF can describe the common start page. It's just that in order to do > so you need a blank node, ie an > unnamed resource to describe it. The need for blank nodes suggests a > badly architected web application > (if you are looking at it from the REST perspective of course). > Whenever you have one resource that produced > widely varying representations for each user, then it is an > indication that there are a number of resources > hiding behind that one resource. This is known to be the major > criticism of SOAP btw. Unless my understanding of RDF is fundamentally flawed, the following can describe the root node of a domain: <rdf:RDF ...> <rdf:Description rdf:about="http://example.com/"> ... </rdf:Description> </rdf:RDF> Regards, Alan Dean http://thoughtpad.net/alan-dean ... and as we are mentioning RDF, see also: http://thoughtpad.net/alan-dean/http-in-rdf.html http://thoughtpad.net/alan-dean/rdf-cheatsheet.html
On 16 May 2007, at 15:07, Mike Dierken wrote:
> Well, I was just teasing, but since I don't know RDF, perhaps you
> can explain to me what the 'node' in RDF is meant to do or
> represent and why a blank one is needed?
> Why is it that you want to relate that resource to another via the
> 'ownedBy' property?
RDF is a very simple relational schema where the subject, the relation
and the object of the relation are identified by URIs [1].
One can use a blank node for things one does not have a URI for. This
is equivalent to saying "there exists".
Now if you describe that start page you can say one thing clearly:
<https://www.dev.java.net/servlets/StartPage> a xxx:StartPage .
in English:
<https://www.dev.java.net/servlets/StartPage> is a start page.
xxx being some namespace where the definition of the class of
StartPages can be found.
The problem is that this start page returns information about the
user that has logged in, and is specific to that session. If you
wanted to describe that information you would have to say something like
<https://www.dev.java.net/servlets/StartPage> has a representation
that describes Henry, one that describes James, one that describes
Jim, etc... But you cannot give those representations URLs (you could
give them some arbitrary URN of course). As a result you cannot mail
this information around to people, you cannot really use it to find
out who is who in a project on dev.java.net, etc, etc... Simply put:
all of these pages would be much better tied to a specific url.
The RESTful way to do this would be for the login page to redirect
you to
<https://developers.java.net/bblfish>
<https://developers.java.net/jag>,
<https://developers.java.net/jim>,
etc...
depending of course on who is logging in. Now you can talk about
those pages clearly using URIs (Universal Resource Identifiers) which
are also the URLs at which you can get the information.
If you want to play in a global information space, make sure your
resources can be identified Universally.
Henry
[1] see my recent presentation
http://blogs.sun.com/bblfish/entry/semantic_web_birds_of_a1
or the video available from
http://blogs.sun.com/bblfish/entry/google_video_introduces_the_semantic
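Henry's explanation above can be sketched in plain Python (no RDF library; the xxx: predicate names follow his hypothetical example): a URI is a global name, while a blank node is only a local placeholder meaning "there exists something", which is why the per-user representations cannot be referred to from outside.

```python
import itertools

# Triples are (subject, predicate, object). URIs are global names;
# blank nodes are placeholders whose labels only mean something
# inside this one document/graph.
_counter = itertools.count()

def bnode():
    # Mint a fresh blank node label, local to this graph.
    return "_:b%d" % next(_counter)

START = "https://www.dev.java.net/servlets/StartPage"

graph = set()
# We can say one thing clearly: the URI names a start page...
graph.add((START, "rdf:type", "xxx:StartPage"))
# ...but each user's representation has no URI of its own, so we are
# forced to introduce blank nodes just to talk about them at all.
for name in ("Henry", "James", "Jim"):
    rep = bnode()
    graph.add((START, "xxx:hasRepresentation", rep))
    graph.add((rep, "xxx:describes", name))

# Blank nodes cannot be dereferenced or mailed around: there is no
# way to refer to Henry's page from outside this graph.
reps = [o for (s, p, o) in graph if s == START and p == "xxx:hasRepresentation"]
```

Had each user been given their own URL (e.g. https://developers.java.net/bblfish), every `reps` entry would be a dereferenceable name instead of a local `_:bN` label.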
On 5/16/07, Bill de hOra <bill@...> wrote:
> I want to believe you've found an axiomatic means to describe
> controller URLs, because I think controller URLs are broken, and
> transitively, any framework that encourages them, is broken.

Bill, would you please explain what you mean by a controller URL, and why they are broken? I might be able to guess, but don't want to make assumptions. And my guess might include some resources that I think are useful.
Bob Haugen wrote:
> On 5/16/07, Bill de hOra <bill@... <mailto:bill%40dehora.net>> wrote:
> > I want to believe you've found an axiomatic means to describe
> > controller URLs, because I think controller URLs are broken, and
> > transitively, any framework that encourages them, is broken.
>
> Bill, would you please explain what you mean by a controller URL

a URL whose /resource/ depends on who you are (or what your browser state is); operationally any design that requires you to go through a middleman to get to the resource:

http://www.google.com/calendar/render?pli=1

is a working example, JIRA /browse URLs would be another (where that goes depends on the last project you looked at). In OO designs, these are like manager classes. I was about to send you RTE's TV listings, but they've had a site redesign and all the channels are addressable (at last). In the past, clicking on a channel's details kept you in the same URL due to the way they used frames.

cheers
Bill
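Bill's definition can be illustrated with a minimal sketch (a toy dispatcher with a made-up session table; not any real framework): a controller URL resolves through browser state, while addressable design gives each resource its own URL.

```python
# Hypothetical session table: browser state decides what the
# controller URL resolves to.
sessions = {"cookie-abc": "bblfish", "cookie-xyz": "jag"}

def controller_get(url, cookie):
    # One URL, many resources hiding behind it: the response depends
    # on who you are, so the URL alone names nothing you can link to.
    if url == "https://www.dev.java.net/servlets/StartPage":
        return "start page for " + sessions[cookie]
    raise KeyError(url)

def addressable_get(url):
    # The alternative: log-in redirects to a per-user URL, after which
    # cookies are irrelevant to what the URL identifies.
    user = url.rsplit("/", 1)[1]
    return "start page for " + user

# The controller URL yields two different resources for the same URL:
a = controller_get("https://www.dev.java.net/servlets/StartPage", "cookie-abc")
b = controller_get("https://www.dev.java.net/servlets/StartPage", "cookie-xyz")
# The addressable URL can be bookmarked, mailed around, and described:
c = addressable_get("https://developers.java.net/bblfish")
```

The middleman in Bill's sense is the `sessions` lookup: remove it and the URL no longer determines the resource.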
Mike Dierken wrote:
> Well, I was just teasing, but since I don't know RDF, perhaps you can
> explain to me what the 'node' in RDF is meant to do or represent and why
> a blank one is needed?
> Why is it that you want to relate that resource to another via the
> 'ownedBy' property?
>
> On 5/16/07, *Henry Story* <henry.story@...> wrote:
>
> On 16 May 2007, at 14:56, Mike Dierken wrote:
>
> > Which application are you suggesting is badly architected?
> > RDF, because it can't describe something simple like a common start
> > page?
>
> RDF can describe the common start page. It's just that in order to do
> so you need a blank node, ie an
> unnamed resource to describe it.

So... blank nodes are a placeholder for causal names? That would seem to be so, since many people think URIs are proper names, and this thinking is clearly influencing web architectural decisions (cf. the debacle around http-range and "information resources").

It's this kind of axiom bake-in that makes me quite nervous about the semantic web. Assuming that a URL which requires a blank node is bad design is an assumption that one theory of names counts more than another. Anyone familiar with modern philosophy will know that naming theory remains *contentious*. Bad design then is a leap, not an implication. Even though I agree with you on the design issue here, it gives me indigestion to use naming theory to support it.

Why, just today, a colleague and I were talking about syndication in the context of a folder system. In that case, there was a folder called 'health' and it had a few direct child folders. All these folders have URLs. The Atom feed for the 'health' URL only shows changes to the folder's direct children.

Q: should that feed show changes to child content further down, ie should the feed scope work transitively?

A: in this case yes, but in general for such folder structures, it depends on what the 'childof' relationship means.
For example, all categories are held in a root 'map' folder, but the relationship between it and its categories does not necessarily mean all category changes should show up in the map feed. Whereas the intent of the authors when they created the 'health' folder was that all children were to do with health, hence the health feed should show all changes.

Give it 20 years of bake-in. When the web's Wittgenstein turns up, the undoing will create an entire industry.

cheers
Bill
[ Attachment content not displayed ]
A. Pagaltzis wrote:
> * Miran Z. <zlac53@... <mailto:zlac53%40yahoo.com>> [2007-05-16
> > Any thoughts/comments/suggestions?
>
> I am still trying to get a grip on my thinking, but the point is
> that state means a different thing for the server than for the
> client. The client may keep implicit state that persists across
> requests, but the server may not. State on the server is always
> explicit, exposed via resources.

It's a heck of a lot better than HATEOAS. I'm adopting it.

cheers
Bill
On 5/16/07, Bill de hOra <bill@...> wrote: > Bob Haugen wrote: > > Bill, would you please explain what you mean by a controller URL > > a URL whose /resource/ depends on who you are (or what your browser > state is); operationally any design that requires you to go through a > middleman to get to the resource: Ok, thanks. I understand and probably agree. Will think about it more, anyway.
s = Server(args)
doc = s.get_document(doc_url)
doc = edit(doc)
try:
    s.save(doc)
except ServerException, e:
    log(e)

I see this programming style a lot. I'm not sure what to make of it, but I suspect that letting developers pretend that servers are in the same address space as the client code results in problems; for example, the Server class can be supplied by the server owner, leading to tight API coupling, despite documents travelling over HTTP. That said, I'm not sure what the sensible alternatives are, but I think the problems are independent of how you design your web application, ie, cool URLs won't necessarily help.

cheers
Bill
On 16 May 2007, at 15:59, Bill de hOra wrote:
> Mike Dierken wrote:
> >
> >
> > Well, I was just teasing, but since I don't know RDF, perhaps you
> can
> > explain to me what the 'node' in RDF is meant to do or represent
> and why
> > a blank one is needed?
> > Why is it that you want to relate that resource to another via the
> > 'ownedBy' property?
> >
> >
> > On 5/16/07, *Henry Story* <henry.story@...
> > <mailto:henry.story@...>> wrote:
> >
> > On 16 May 2007, at 14:56, Mike Dierken wrote:
> >
> > > Which application are you suggesting is badly architected?
> > > RDF, because it can't describe something simple like a common
> start
> > > page?
> >
> > RDF can describe the common start page. It's just that in order
> to do
> > so you need a blank node, ie an
> > unnamed resource to describe it.
>
> So... blank nodes are a placeholder for causal names? That would
> seem
> to be so, since many people think URIs are proper names, and this
> thinking is clearly influencing web architectural decisions (cf, the
> debacle around http-range and "information resources").
URIs are Universal Resource Identifiers. Nothing more, nothing less.
Speaking of causal names, I guess you are hinting at Kripke, but I
don't see the need to go into that space of debate, however
interesting that may be.
> It's this kind of axiom bake-in that makes me quite nervous about the
> semantic web. Assuming that a URL which requires a blank node is bad
> design, is an assumption that one theory of names counts more than
> another.
Ok. Clearly I did not explain blank nodes well enough, as the
above does not make sense.
URLs don't require blank nodes, that is nonsense. Blank nodes are not
URLs by definition. They name, but have local scope, tied to the
document in which they find themselves.
So for example I could in this document describe you with the blank
node [1]
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
_:bill a foaf:Person .
_:bill foaf:name "Bill DeHora" .
_:bill foaf:homepage <http://dehora.net/> .
_:bill foaf:weblog <http://dehora.net/journal> .
perhaps one day you will create a foaf file and give us a URL to
identify you.
Then I can say
<http://dehora.net/p/bill#i> a foaf:Person .
<http://dehora.net/p/bill#i> foaf:name "Bill DeHora" .
<http://dehora.net/p/bill#i> foaf:homepage <http://dehora.net/> .
<http://dehora.net/p/bill#i> foaf:weblog <http://dehora.net/journal> .
> Anyone familiar with modern philosophy will know that naming
> theory remains *contentious*. Bad design then is a leap, not an
> implication. Even though I agree with you on the design issue here, it
> gives me indigestion to use naming theory to support it.
The web is all about URIs (well URLs in particular). It's one of the
cornerstones of the web. The other is HTTP
used RESTfully. Sorry to say, but names are central to the web. They
may have been contentious in academic papers, where
people do set out to test ideas to the limit, but clearly they work
very well on the web. In any case for those who think they are
contentious I suggest explaining what is contentious about them :-) .
(not here please)
> Why, just today, a colleague and I were talking about syndication
> in the
> context of a folder system. In that case, there was a folder called
> 'health' and it has a few direct child folders. All these folders have
> URLs. The Atom feed for the 'health' URL only shows changes to the
> folder's direct children.
>
> Q: should that feed show changes to child content further down, ie
> should the feed scope work transitively?
>
> A: in this case yes, but in general for such folder structures, it
> depends on what the 'childof' relationship means. For example all
> categories are held in a root 'map' folder, but the relationship
> between
> it and its categories does not necessarily mean all category changes
> should show up in the map feed. Whereas the intent of the authors when
> they created the 'health' folder was that all children were to do with
> health, hence the health feed should show all changes.
>
> Give it 20 years of bake in. When the web's Wittgenstein turns up, the
> undoing will create an entire industry.
I think you are getting very opaque here. I like Wittgenstein, but
his later philosophy is acknowledged to be one of the most difficult
to read and understand.
I much preferred your definition of a controller URL:
"""
a URL whose /resource/ depends on who you are (or what your browser
state is); operationally any design that requires you to go through a
middleman to get to the resource:
"""
Nit picking a little: This needs to be rephrased with web
architecture in mind. A URI, and hence a URL, identifies a
*resource*. So the above sentence does not quite make sense, though
it is really close. The problem with the URLs we were discussing [2],
is that they seem to identify a set of resources (a set being a
thing, it can have a URL) and behaves as a controller, returning a
representation of one of the elements of the set depending on the
settings in your browser. Since none of the elements of that set have
been given URLs one needs blank nodes to describe them in RDF (and so
in any other language too).
The resources pointed to by such URLs are in fact, as you point out,
hiding other things. They are not switching one over to those things, as
a login page does, but really hiding them. Those things have no name
that one can use independently of the context of one's browsing
experience. One could invent a URN for them of course, but that would
not be very helpful for locating them.
In RDF the simplest way to name them then is using a blank node (an
existential quantifier, for those who have studied a little logic),
and relate those then to the resource which is their controller.
So using your terminology we can write:
<https://www.dev.java.net/servlets/StartPage> a dehora:Controller ;
dehora:hides [ a foaf:PersonalProfileDocument;
foaf:primaryTopic [ foaf:name "Henry Story";
foaf:mbox
<mailto:henry.story@...> ]
];
dehora:hides [ a foaf:PersonalProfileDocument;
foaf:primaryTopic [ foaf:name "James Gosling";
foaf:blog <http://blogs.sun.com/
jag/> ]
];
dehora:hides [ a foaf:PersonalProfileDocument;
foaf:primaryTopic [ foaf:name "Tim Bray";
foaf:blog <http://www.tbray.org/
ongoing/> ]
];
.
I like this. It makes the point more clearly than my previous attempt
to explain this by referring to representations.
Those '[' stand for blank nodes by the way.
So not only can we describe dehora:Controllers now, but we can see
why they are opaque.
On the web opacity is not a good thing, mostly.
>
> cheers
> Bill
[1] using N3 notation. RDF is not tied to XML. It's about semantics;
see image at
http://blogs.sun.com/bblfish/entry/dropping_some_doap_into_netbeans
[2] You mentioned:
http://www.google.com/calendar/render?pli=1
I mentioned:
https://www.dev.java.net/servlets/StartPage
As this mailing list munges spacing, thereby making the code I sent unreadable, I'll rewrite this more carefully:

<https://www.dev.java.net/servlets/StartPage> a dehora:Controller .
<https://www.dev.java.net/servlets/StartPage> dehora:hides _:profile1 .
<https://www.dev.java.net/servlets/StartPage> dehora:hides _:profile2 .
<https://www.dev.java.net/servlets/StartPage> dehora:hides _:profile3 .
_:profile1 a foaf:PersonalProfileDocument .
_:profile1 foaf:primaryTopic _:p1 .
_:profile2 a foaf:PersonalProfileDocument .
_:profile2 foaf:primaryTopic _:p2 .
_:profile3 a foaf:PersonalProfileDocument .
_:profile3 foaf:primaryTopic _:p3 .
_:p1 foaf:name "Henry Story" .
_:p1 foaf:mbox <mailto:henry.story@...> .
_:p2 foaf:name "James Gosling" .
_:p2 foaf:weblog <http://blogs.sun.com/jag/> .
_:p3 foaf:name "Tim Bray" .
_:p3 foaf:weblog <http://www.tbray.org/ongoing/> .

That should render better and make the subject-relation-object nature of RDF clearer.
Ittay Dror wrote: > > so let's take the case of the shopping cart. according to the above, > there shouldn't be any shopping cart resource. as a client, i just > browse a web site for items i want, then post them all, in one go, to > http://example.com/checkout. am i right? No, there can be a shopping cart resource on the server that has state and that has a URI. That's not the only way to do it, but it's one way. The key is that this shopping cart does have its own URI. The contents of the cart are not referenced by some sort of session token like a cookie that maps to the resource. The URI is the identifier, and different shopping carts have different URIs. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
A. Pagaltzis wrote:
> Obviously if the intermediary saw anything other than a 2xx, it
> couldn’t cache the request body. But even if it did see a 2xx
> response, the semantics of PUT (namely, that the origin server
> may do anything it wants with the request body) would seem to
> absolutely preclude caching by intermediaries.

That's an arguable point. Some people think it's OK for the origin server to do anything it wants with a PUT body and still return a 200 OK. Some people think that's not acceptable. The semantics of PUT are not in consensus within the community.

Given that some servers may change the body of the PUT because some developers believe that's OK, it is probably not safe for a cache to store a PUT body, whether that would be legal according to the spec or not.

Perhaps cache-control headers in PUT responses could be used to clarify this? E.g. if the server is going to change the body it should send cache-invalidating headers in the PUT response?

--
Elliotte Rusty Harold
elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Harold wrote:
> A. Pagaltzis wrote:
>
>> Obviously if the intermediary saw anything other than a 2xx, it
>> couldn’t cache the request body. But even if it did see a 2xx
>> response, the semantics of PUT (namely, that the origin server
>> may do anything it wants with the request body) would seem to
>> absolutely preclude caching by intermediaries.
>
> That's an arguable point. Some people think it's OK for the origin
> server to do anything it wants with a PUT body and still return a 200
> OK. Some people think that's not acceptable. The semantics of PUT are
> not in consensus within the community.

To my mind if a server understands a representation - rather than treating the entities as opaque and requiring a relatively agnostic storage in a file-system - then it may act as interpreted. This can include updating other representations of the same resource so that all representations are synch'ed. This seems both a natural result of the fact that a single resource can have multiple representations (unless those multiple representations directly contradicting each other is seen as perfectly acceptable) and the only practical way of dealing with cases where the state of a resource is not internally modelled by "flat" storage of entities (I do not see it as practical, for example, to have to store the XML that was responsible for a change in a resource that can be altered by PUTting some XML to ensure that the same comments, namespace prefixes, encoding, and other implementation-dependent artefacts will be returned on GET).

It could also to my mind - and here I'm no doubt going to be more controversial - mean an amendment rather than a replace ONLY IF the entity received has something in its syntax which means "Value for X unknown/unstated", in which case the server could take the value for X it already has and maintain it after the PUT.
> Given that some servers may change the body of the PUT because some
> developers believe that's OK, it is probably not safe for a cache to
> store a PUT body, whether that would be legal according to the spec or not.
>
> Perhaps cache-control headers in PUT responses could be used to clarify
> this? E.g. if the server is going to change the body it should send
> cache-invalidating headers in the PUT response?

There's less ambiguity here:

"If the request passes through a cache and the Request-URI identifies one or more currently cached entities, those entries SHOULD be treated as stale. Responses to this method are not cacheable." (RFC 2616 §9.6)

Which tells us what caches should do in this case. Notably it does indeed correspond with a situation where the server's reaction to a PUT is not to update the resource in such a way that a GET will return the same entity - the cache will merely clear its records and not assume anything about what the server has done.
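The §9.6 behaviour quoted above is mechanical enough to sketch (a toy in-memory cache, not any real proxy's API): a write method passing through the cache marks cached entries for that Request-URI stale, and the response to the PUT is itself never stored.

```python
class ToyCache:
    # Minimal illustration of RFC 2616 section 9.6 / 13.10: entries
    # for a Request-URI touched by a write method SHOULD be treated as
    # stale, and responses to PUT are not cacheable.
    def __init__(self):
        self.entries = {}  # uri -> (cached body, fresh?)

    def handle(self, method, uri, response_body):
        if method == "GET":
            # Assume a cacheable 200 response for simplicity.
            self.entries[uri] = (response_body, True)
        elif method in ("PUT", "POST", "DELETE"):
            # Invalidate, and do NOT store the request body or the
            # response: the origin server may have transformed it.
            if uri in self.entries:
                body, _ = self.entries[uri]
                self.entries[uri] = (body, False)
        return response_body

cache = ToyCache()
cache.handle("GET", "/doc", "v1")            # cached, fresh
cache.handle("PUT", "/doc", "v2-as-stored")  # stale, not cached
```

The key point matching the thread: after the PUT the cache knows only that its copy is stale, and assumes nothing about what the server actually stored.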
[ ok this is slightly off topic of REST discuss, but I thought I was
replying to rest discuss, and since
I put a lot of energy into replying, I'll forward it there anyway.
It may be interesting for others too. ]
On 17 May 2007, at 01:29, Steve Loughran wrote:
> On 5/12/07, Henry Story <henry.story@...> wrote:
>
>>
> >> The message is simple: RDF and REST form a perfect mesh. URIs name
>> Resources which have multiple representations, REST of course stands
>> for Representation state transfer, and RDF is the simplest possible
>> way to describe Resources. They are three parts of the triangle. Of
>> course I don't have to convince anyone on this list of the power,
>> simplicity and clarity of REST. I do urge RESTafarians though to
> >> start looking at RDF more carefully (not necessarily the xml version) as
>> the other side of the task they are endeavoring to accomplish.
>>
>> Only the simplest possible thing on the web can work. URIs, REST and
>> RDF are each perfections of simplicity that mesh together perfectly.
>
Examples 1 and 2 need rules of some sort. This is being worked on at
the W3C. The W3C works step by simple step, which is important if one
wants to get something this important out right.
> 1. How do I express a fact about all pages in my site?
You can see how it could be done with this simple SPARQL example
CONSTRUCT { ?url :ownedBy <http://bblfish.net/people/henry/card#me> . }
WHERE {
?url a :InformationResource .
FILTER REGEX( str(?url), "http://bblfish.net/.*" )
}
> 2. how do I express falsehood (None of my pages need cookies)?
Truth and falsehood are interesting concepts, by the way. Truth is a
relation on sentences, or sets of sentences.
This is well known from Tarski's work taken up later by Donald
Davidson. Facts are not true or false,
sentences are.
"Snow is white" is true in English if and only if Snow is white .
is the well known example. This is why truth is known as a
disquotational function. It removes the quotes from the asserted
sentence. In practical terms, if I believe that something you say is
true, then I believe it. I accept the content of your sentence into
my belief store.
In RDF this means that we have to assert truth or falsity on graphs
of sentences.
CONSTRUCT { ?g a :Falsehood . }
WHERE {
GRAPH ?g {
?url a :NeedsCookie .
}
FILTER REGEX( str(?url), "http://bblfish.net/.*" )
}
> 3. how do I resolve things efficiently in the absence of the
> assumption that non-provable!=false.
Mhh. Ok here I am not completely sure. There is a lot of stuff in
this question, and I can't unpack it all myself with my current
knowledge.
One way to look at it is to ask the question: does it make sense to
state in an open world that non provable is false. The web is open,
so to treat it as closed would be a fundamental mistake.
Secondly, efficiency has to do with algorithms for working on the
data. You can of course decide to reason about the data given to you
as you wish. If making closed world assumptions in particular cases
is more efficient and mostly right, then you are completely free to
do that.
Finally I think part of the trick is to use graphs, again. Graphs are
really, really important aspects of understanding the semantic web,
which is not apparent in the initial, very general, rdf
specification, though in some sense it is there in full view. The
whole semantic web requires the open world assumption, and so the
Semantic Web engineers were correct to use an open logic at the most
general level. After all we should always be open about what we can
learn, and we had better work within that framework if we are going
to scale to the web.
Within a graph though one can make statements about what appears
there or does not.
CONSTRUCT { ?p a :Vagabond . }
FROM <http://bblfish.net/people/henry/card>
WHERE { <http://bblfish.net/people/henry/card> foaf:primaryTopic ?p .
OPTIONAL { ?p contact:home ?j } .
FILTER ( ! bound(?j) )
}
So if you have extra information from somewhere that all foaf files
on a certain place have contact information for everybody except
vagabonds, then feel free to use the above query.
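Henry's scoping point (a closed-world check is only safe within a particular graph) can be sketched without SPARQL, in plain Python. The graph contents follow his foaf card example, and the "vagabond" rule is the same illustrative one from the query above:

```python
# Two levels: a store of named graphs, each graph a set of
# (subject, predicate, object) triples.
graphs = {
    "http://bblfish.net/people/henry/card": {
        ("card", "foaf:primaryTopic", "henry"),
        ("henry", "foaf:name", "Henry Story"),
        # note: no contact:home triple for henry in THIS graph
    },
}

def vagabonds(graph_name):
    # Closed-world inference scoped to one graph: within a graph we
    # may reason about what it does and does not contain. On the open
    # web, absence of a triple proves nothing.
    g = graphs[graph_name]
    people = {o for (s, p, o) in g if p == "foaf:primaryTopic"}
    housed = {s for (s, p, o) in g if p == "contact:home"}
    return people - housed
```

The "non-provable => false" step happens only inside `vagabonds`, where the graph boundary makes it a legitimate local assumption rather than a claim about the whole web.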
> Dont get me wrong, I have to not only tolerate the Jena team as near
> neighbours, but even hang round with some of the RDF folk and consider
> when to use it appropriately. I just find some aspects of the whole
> model limited, even when you use the (richer) N3 notation...I think
> the underlying problem is there's no easy way to represent negativity
> in an open world, unlike prolog's horn-clause subset of first order
> predicate calculus.
Ok. So a lot has happened over the last few years. A lot of exciting
things. It may be worth revisiting some of those assumptions :-)
Henry
> -steve
> What should be stateless is the interaction. I like to think of it as
> context-less interaction where everything has to be explicitly
> stated. A delete request must state exactly what to delete rather than
> having the server assuming the last item looked at.

Yohanes, I apologize for the interjection, but your statement seems to contradict the comments on the shopping cart example. If I understand your vision of the example correctly, then the server needs to keep context in order to know whose shopping cart is to be retrieved and returned to the client (assuming there may be multiple concurrently connected clients). Therefore, gradual accumulation of shopping cart contents conflicts with the context-less (or, in other words, stateless) requirement.

All the best,
Hovhannes

--- In rest-discuss@yahoogroups.com, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
>
> Ittay Dror <ittayd@...> writes:
>
> > according to the above, there shouldn't be any shopping cart
> > resource.
>
> The quoted text does not preclude the existence of a personalised
> shopping cart resource that exists across requests whose state is kept
> by the server.
>
> > as a client, i just browse a web site for items i want,
> > then post them all, in one go, to http://example.com/checkout.
>
> You would also want to reserve what you have picked out
> immediately. Each 'Buy Me' link could be a POST to a shopping cart
> resource, say, http://example.com/users/alice/shoppingcart, to
> incrementally accumulate purchases by changing the shopping cart
> resource's server-side state each time.
>
> So, when checking out, you could get the current cart state from the
> server to present to the user. Then the cart state could be sent back
> to the server so there is no confusion as to which cart state to check
> out: the 'current' one, which may have changed outside of the user's
> knowledge, or the one that the user saw.
> If you expect the cart to hold numerous items, such that the bandwidth
> cost of shipping the item list back and forth is prohibitive, you
> could expose the shopping cart as a versioned resource. Then you can
> check out a specific version of the cart.
>
> > am i right?
>
> What should be stateless is the interaction. I like to think of it as
> context-less interaction where everything has to be explicitly
> stated. A delete request must state exactly what to delete rather than
> having the server assuming the last item looked at.
>
> YS.
Jon, Ittay,

I agree with both of you in that a RESTless server encapsulates the knowledge (or part of the knowledge) of how the data should be processed. However, it does not store the actual data; rather, it expects the client to provide all the data necessary to process the request. That can be the shopping cart contents.

My vision of a RESTful shopping cart interaction is as follows:

Client browses Amazon web site
User selects on "add to cart" link
Buy submits /cart?action=add&id=XXXX
Server response includes id=XXXX (it can be XML or a checkout URI with parameters - /cart?action=checkout?id=XXXX)
User selects another item
Request goes to server as /cart?action=add&id=XXXX&id=YYYY
Server responds with /cart?action=checkout?id=XXXX&id=YYYY
User selects "checkout" - client-side code submits the "checkout" request as instructed by the server.

There can be many variations of this scheme. What's important to me is that each and every request has all the dynamic data - in other words, the session state, sometimes also referred to as session context. I deliberately used IDs in the example, aiming to demonstrate use of persistent server-side data. To me, a server may and often will have constant data necessary to process requests. That can be inventory prices, availability, etc. The important distinction, I believe, is that server-side data does not change in the course of normal operation.

--- In rest-discuss@yahoogroups.com, Jon Hanna <jon@...> wrote:
>
> Ittay Dror wrote:
> > so let's take the case of the shopping cart. according to the above,
> > there shouldn't be any shopping cart resource. as a client, i just
> > browse a web site for items i want, then post them all, in one go, to
> > http://example.com/checkout. am i right?
>
> A server could have a bunch of shopping carts.
>
> It could know that certain authentication parameters (whether RFC 2617
> username and username/realm/password hash, or whatever) is needed to
> access the cart.
>
> It could know a lot of things about how that shopping cart relates to
> other resources.
>
> None of these things require it to know anything about any state held in
> any webbrowser or other client.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
>>>>> "hovhannes" == hovhannes tumanyan <hovhannes_tumanyan@...> writes:
hovhannes> Buy submits /cart?action=add&id=XXXX
This doesn't look like REST...
We have only four verbs, remember?
- --
All the best,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.8 <http://mailcrypt.sourceforge.net/>
iD8DBQFGUVoGIyuuaiRyjTYRArpGAJ48FTD4tsDPSunOY+2U5nI8wURkYACeMhJR
ltp9hxV3JXNBMTzOB5stUFQ=
=GNca
-----END PGP SIGNATURE-----
hovhannes_tumanyan wrote:
> If I understand your vision of the example correctly, then server
> needs to keep context in order to know whose shopping cart is to be
> retrieved and returned to the client (assuming there may be multiple
> concurrently connected clients). Therefore, gradual accumulation of
> shopping cart contents conflicts with the context-less (or stateless
> in other words) requirement.

Not at all. A server can maintain a shopping cart resource identified by a URI. The state of this resource would include what authentication information is needed to access it, its contents and so on. The client can then use the cart's URI to identify it to the server.

The important thing is that the cart is being identified explicitly by the client, rather than the server identifying the client (through session keys, cookies etc.) and then finding the cart for that client.
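The cart-as-a-URI-identified-resource idea can be sketched in a few lines of Python. Everything here (the cart URI, the store, the function names) is hypothetical illustration; the point is that the client names the cart explicitly on every request, so the server never needs a per-client session lookup:

```python
# A shopping cart as a URI-identified server-side resource. Server-side
# state is fine in REST; what is avoided is the server identifying the
# *client* (session keys, cookies) and then finding the cart for it.

carts = {}  # maps cart URI -> list of item ids (resource state)

def post_item(cart_uri, item_id):
    """POST an item to the cart resource identified by cart_uri."""
    carts.setdefault(cart_uri, []).append(item_id)
    return 201  # Created: the cart's state changed via an unsafe method

def get_cart(cart_uri):
    """GET the current representation of the cart resource (safe)."""
    return list(carts.get(cart_uri, []))

uri = "/users/alice/shoppingcart"
post_item(uri, "XXXX")
post_item(uri, "YYYY")
print(get_cart(uri))  # ['XXXX', 'YYYY']
```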
hovhannes_tumanyan wrote:
> Jon, Ittay,
> I agree with both of you in that RESTless server encapsulates the
> knowledg (or part of the knowledge) of how the data should be
> processed. However, it does not store the actual data it rather
> expects the client to provide all the data necessary to process the
> request. That can be the shopping card contents.

The server can indeed store data. The client "providing all the data necessary to process the request" can be done by using the URI that the server is using to identify a shopping cart resource.

> Client browses Amazon web site

While we shouldn't focus on any particular site too much, there is a danger in paying attention to Amazon in particular, because they have a webservice that is often referred to as RESTful but which isn't. Their non-RESTful service is no worse than any other non-RESTful webservice (in many ways it's very good), but just the fact that it gets cited so often incorrectly as an example of something that is RESTful is a problem in itself.

> User selects on "add to cart" link
> Buy submits /cart?action=add&id=XXXX

That looks like GET being used to perform an unsafe operation. That's highly un-RESTful, because it means we have GET doing something that GET promises not to do. There's no reason to suppose that a user has selected an "add to cart" link when something GETs /cart?action=add&id=XXXX; it could well have been a read-ahead cache and not a user at all. Caching becomes a mess. And other problems will ensue.

> There can be many vairations of this scheme. What's important to me
> is that each and every request has all the dynamic data.

That's not important at all. Webservers can maintain data; that's what webservers do. Indeed, even with your method it's going to have to finally change its data on the final checkout of the cart.

The important thing is that the webserver changes its data due to unsafe methods (POST, PUT, DELETE) and that its data be retrieved due to operations on resources identified by URIs, rather than the webserver "knowing" about the client or "session". Webserver data is webserver data; it shouldn't have data about the client.

> In other
> words, the session state, sometimes also referred as session context.

REST does not have session state; session-state-by-the-back-door is still session state.

Let's look at how your resources are being partitioned. If you have added items 12345 and 64245 to your cart and go to look at item 92741, then you would have something like:

/item?id=92741&cart=12345&cart=64245

In other words you don't have a resource identified by /item?id=92741 that could be labelled in human-readable terms "Item number 92741"; you have "Item number 92741 while looked at by someone who has items 12345 and 64245 in their cart and no other items". How often is that resource going to be used? One of the things that makes a resource valuable is how often it gets looked at (caching is just one heavy advantage here).

The technique you suggest can be useful enough in cases of "wizard" interfaces with a handful of well-defined steps, where information is gathered before being used in a final POST or PUT to perform the task in one transaction; a resource of "step 2 of the wizard for a user who has selected choices B, C and E" only affects the operation of the rest of that wizard. But the value of wizards falls down once you have more than a finite handful of clear steps. Having such a resource map for the part of a site that handles the user entering shipping and billing information before finally confirming it would, I'd say, be harmless enough, but having such a resource map for most of a site is very weak design.

> I deliberatly used ID's in the example aiming to demonstrate use of
> persistent server side data. To me server may and often will have
> constant data necessary to process requests. That can be inventory
> prices, availability etc. The important distinction, I believe, that
> server side data does not change in the course of normal operation.

PUT, POST and DELETE are in HTTP for a reason.
On 21 May 2007, at 05:05, hovhannes_tumanyan wrote:
> Request goes to server as /cart?action=add&id=XXXX&id=YYYY
> Server responds with /cart?action=checkout?id=XXXX&id=YYYY
> User selects "checkout" - client side code submists the "checkout"
> request as instructed by server.

So what happens if the user presses 'back' in his browser and adds another item from his history? He will then request

/cart?action=add&id=XXXX&id=ZZZZ

..and lose his YYYY item.

The only issue I can see with a more 'traditional' RESTian shopping cart - as a real resource - is that the client will need to find out what is 'his' cart, and that in an HTML variant (with rather dumb clients) it means that from the product representation resource it would be more difficult to have proper 'Add this product to your cart' links - because the link would vary by user unless you do some redirection tricks.

I won't comment on the ?action= URIs.

--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
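Stian's back-button failure is easy to reproduce mechanically. A hedged sketch, assuming the cart state lives entirely in the link's query string (the `add_via_uri` helper is invented for illustration, not any real client):

```python
# Why carrying the cart in the URI breaks with the Back button:
# replaying an older link rebuilds the cart from that link's query
# string, silently dropping anything added since.
from urllib.parse import parse_qs, urlencode

def add_via_uri(current_query, item_id):
    """State-in-URI style: the 'cart' is whatever ids the link carries."""
    ids = parse_qs(current_query).get("id", [])
    return urlencode([("action", "add")] + [("id", i) for i in ids + [item_id]])

q1 = add_via_uri("", "XXXX")      # action=add&id=XXXX
q2 = add_via_uri(q1, "YYYY")      # action=add&id=XXXX&id=YYYY
# User presses Back to the q1 page and adds ZZZZ from history:
q3 = add_via_uri(q1, "ZZZZ")      # the YYYY item is gone
print(sorted(parse_qs(q3)["id"]))  # ['XXXX', 'ZZZZ']
```

A cart held as a server-side resource is immune to this: the Back button only replays a stale representation, not a stale cart.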
Jon Hanna <jon@...> writes:
> hovhannes_tumanyan wrote:
> > prices, availability etc. The important distinction, I believe, that
> > server side data does not change in the course of normal operation.
>
> PUT, POST and DELETE are in HTTP for a reason.

hovhannes_tumanyan,

I think if you can accept that the state of server-side resources can change, the whole picture becomes clearer.

To elucidate on Jon Hanna's reply, the change should be caused by POST, PUT or DELETE (if you are using HTTP anyway) and never by GET.

YS.
On 5/16/07, Costello, Roger L. <costello@...> wrote:
> Andrzej Jan Taramina wrote:
>
> > I would pay good coin to have a book by Roy Fielding in my library

+1

http://proquest.umi.com/pqdlink?did=727728331&Fmt=7&clientId=79356&RQT=309&VName=PQD

You too can buy a bound copy of Roy's dissertation via ProQuest. =)

--
justin
Yohanes,

I agree that server state alterations, if required due to the nature of the application, should be accomplished by PUT, POST and DELETE. However, I respectfully disagree with the statement that interactions in my example modify the server state - server responses will be consistently the same no matter who submits the requests in my example, and no matter how many times. Do you agree?

Thank you for your time and patience,
Hovhannes

--- In rest-discuss@yahoogroups.com, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
>
> Jon Hanna <jon@...> writes:
>
> > hovhannes_tumanyan wrote:
> > > prices, availability etc. The important distinction, I believe, that
> > > server side data does not change in the course of normal operation.
> >
> > PUT, POST and DELETE are in HTTP for a reason.
>
> hovhannes_tumanyan,
>
> I think if you can accept that the state of server-side resources can
> change, the whole picture becomes clearer.
>
> To elucidate on Jon Hanna's reply, the change should be caused by
> POST, PUT or DELETE (if you are using HTTP anyway) and never by GET.
>
> YS.
Hello,

My software engineering feeling says to go to REST. In my work we already use HTTP as the interface for system integration. In fact it already looks quite a bit like it. My proposal is to switch to REST entirely, but argumentation is (understandably) needed.

Examples we use:

POST ../test1?cmd=run
GET ../test2?cmd=delete

My argumentations:
* It is more "standard"
* Tooling available: RestLet framework, Frevo (maybe not much today but it certainly has a lot of potential, see Microsoft initiatives)
* SOAP starts supporting REST (they do not do this just like that)

Counter arguments from engineers:
* We do this already, so why change?
* REST is just a hype, so why follow?

I am looking for more pro-arguments, so you RESTarians can help me ?!

Regards,
Roger van de Kimmenade
rogervdkimmenade wrote:
> My argumentations:
> * It is more "standard"
> * Tooling available: RestLet framework, Frevo (maybe not much today
> but it has certainly a lot of potential, see Microsoft initiatives)
> * SOAP starts supporting REST (they do not do this just like that)

A big one to my mind is that if you are already using HTTP you are already dealing with a RESTful system whether you want to or not. It stops being a matter of whether or not to do REST and becomes a matter of whether or not to do it right.
On Tue, 2007-05-22 at 13:45 +0000, rogervdkimmenade wrote:
> Examples we use:
> POST ../test1?cmd=run
> GET ../test2?cmd=delete
Hmm. Others might quibble about the command/operation name in the URL
of the POST, but I won't. But it's hard to see how a delete done as a
GET is acceptable.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
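One way to see the change being suggested is as a remapping of commands smuggled in the query string onto the real HTTP verbs. A hedged sketch (the URLs come from Roger's examples; the dispatcher itself is invented for illustration):

```python
# Remap "verb tunneled in the query string" onto real HTTP methods.
# GET ../test2?cmd=delete becomes DELETE ../test2, so caches, spiders
# and proxies can rely on GET being safe.

def restful_request(tunneled_method, url):
    """Translate a tunneled (method, url?cmd=...) pair to (method, url)."""
    if "?cmd=" not in url:
        return tunneled_method, url
    path, cmd = url.split("?cmd=", 1)
    if cmd == "delete":
        return "DELETE", path   # unsafe operation: must not be a GET
    if cmd == "run":
        return "POST", path     # execute-like operation: POST is the catchall
    return tunneled_method, url

print(restful_request("GET", "../test2?cmd=delete"))   # ('DELETE', '../test2')
print(restful_request("POST", "../test1?cmd=run"))     # ('POST', '../test1')
```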
* rogervdkimmenade <rvdkimmenade@...> [2007-05-22 15:50]:
> Counter arguments from engineers:
> * We do this already, so why change?

Someone who moved from structured to object-oriented programming by creating a single object and making every function a method of this object could equally ask: “I do this already, so why change?” Yeah, it uses OOP syntax, and it even works. But that doesn’t mean it’s good design. You get only a minimum of the benefits of OOP, if any.

Likewise, using HTTP in the way your fellow engineers are using it works, sure. But much of the value of HTTP is diminished. You could be getting much more from it.

> * REST is just a hype, so why follow?

REST has been around for two decades. REST is the sum of practices that have worked on the web. The term “REST” is new and indeed hyped, but REST is old and proven. REST is here to stay. Fight it at your own peril.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Roger van de Kimmenade wrote:
> "But much of the value of HTTP is diminished"
> What values? I mean we already use HTTP,

No you don't. If you are using GET to delete resources you aren't using HTTP, you're tunnelling through port 80 and making it look like HTTP so that it gets through.

> but what could we be missing?

Safety, scalability, reliability, interoperability, a clear modelling of the system.

You can also more easily bring a new person onto the team. If that person is experienced in the web they know what GET does and what DELETE does. You don't need to document how to delete a resource, merely which resources can be deleted and what criteria (authentication etc.) may affect the deletion. If you use GET to do a delete then you have to explain to them that you aren't using HTTP at all, that it just looks confusingly similar to HTTP, and then explain what you actually are doing. (Just hope that this new guy didn't have a read-ahead cache on his browser, or it'll have deleted everything while you were explaining this to him.)
Roger van de Kimmenade wrote:
> Ok then, why convert a protocol, that is intuitive, to resources and
> CRUD operations.

You have this the wrong way around. The question you have to answer is: why did you convert a protocol that deals with CRUDE (don't forget the E - we can do Execute-like operations too) operations on resources in an intuitive manner to one that doesn't?

You can hardly claim that when given the choice of GET, PUT, POST and DELETE, GET was the intuitive choice for deleting something. That's highly counter-intuitive; as I said, if I was dropped into that project as a developer I'd be confused as can be until I decided to just drop everything I knew and assume you weren't using the protocol I'm familiar with at all.

For someone who isn't familiar with the details of HTTP, they are still used to the web and to HTTP in practice. GETting a representation of a URI and then performing other GETs on the basis of that, with occasional POSTs (PUTs and DELETEs in rarer cases), is what every five-year-old is now used to. If my 5- and 7-year-old can find their way around the web (and my 3-year-olds are beginning to get the basic idea), it must pass a certain level of intuitiveness.

> PS Another benefit that comes into mind, is the bookmarking feature.
> In case a POST is used to one and only one URL than this is not possible
> and this is with the REST-style cleaner and possible.

Bookmarking is a concrete example of the general point of interoperability and of extensibility. Simply put, if I give you a URI you can run GET on it and get a representation that lets you then go on to do many other things - that's a lot of power in one little string. Bookmarking is an example of that, and a very powerful one in practice.
On Tue, 2007-05-22 at 16:27 +0200, Roger van de Kimmenade wrote:
> "But much of the value of HTTP is diminished"
> What values? I mean we already use HTTP, but what could we be missing?
> We already do it REST like only not right (as already mentioned in
> this thread), but why
> should we do it right when it already works.
Because there's usually a gradient of "working".
People who want "REST" seem to want (at least) one of:
- classical HTML with forms, with thought put into the URLs and
how the forms work.
- a data-/resource-focused API.
- an RPC replacement; XML/HTTP.
It's unclear which one you're targeting. It sounds like you're already
using HTTP, HTML and whatnot for your admin interface, but maybe want
something else...?
In any case, there are usually things one can do to more fully leverage
HTTP. Not having unsafe operations activated by GET allows "web
accelerators" to work. Sending the appropriate cache-control headers
allows both the browser and intermediary caches to work. Having a
resource- and application-state-focused URL space is the underpinning
for much of it. Using the HTTP operations as they're defined is
important. Using the HTTP response codes as they're defined helps.
You can use HTTP as a simple transport protocol, or you can really use
it as an application-level transfer protocol. The value of HTTP is in
the latter.
--
...jsled
http://asynchronous.org/ - a=jsled;b=asynchronous.org; echo ${a}@${b}
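As one concrete instance of "sending the appropriate cache-control headers": a sketch of a GET handler that attaches a freshness lifetime plus a validator, so browser and intermediary caches can work and revalidate cheaply. The function and store here are illustrative, not any particular framework's API:

```python
# Send the headers that let caches work: Cache-Control for freshness,
# ETag as a validator so a stale cache can revalidate with
# If-None-Match and get a cheap 304 instead of the full body.
import hashlib

def get_response(body, if_none_match=None):
    etag = '"%s"' % hashlib.md5(body.encode()).hexdigest()
    headers = {"Cache-Control": "max-age=3600, public", "ETag": etag}
    if if_none_match == etag:
        return 304, headers, ""   # Not Modified: body not resent
    return 200, headers, body

status, headers, body = get_response("<p>hello</p>")
status2, _, body2 = get_response("<p>hello</p>", if_none_match=headers["ETag"])
print(status, status2)  # 200 304
```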
type "why rest" into google and you should see several references that can give you both pro- and anti- arguments for REST.

the two big things i've gained from studying on this topic are:

- uniform interface (GET, POST, PUT, DELETE)
- inherent cacheability (designing to support third party caching - even if the 'third party' is internal)

mamund

On 5/22/07, rogervdkimmenade <rvdkimmenade@...> wrote:
> Hello,
> My software engineering feeling says to goto REST.
>
> In my work we already use the HTTP as the interface for system
> integration. In fact it already looks quite a bit like it.
> My proposal is to switch to REST entirely, but argumentation is
> (understandable) needed.
> Examples we use:
> POST ../test1?cmd=run
> GET ../test2?cmd=delete
>
> My argumentations:
> * It is more "standard"
> * Tooling available: RestLet framework, Frevo (maybe not much today
> but it has certainly a lot of potential, see Microsoft initiatives)
> * SOAP starts supporting REST (they do not do this just like that)
>
> Counter arguments from engineers:
> * We do this already, so why change?
> * REST is just a hype, so why follow?
>
> I am looking for more pro-arguments, so you RESTarians can help me ?!
>
> Regards,
> Roger van de Kimmenade

--
mca
"In a time of universal deceit, telling the truth becomes a revolutionary act." (George Orwell)
Josh Sled wrote:
> Not having unsafe operations activated by GET allows "web
> accelerators" to work.

The situation is worse than if web accelerators didn't work. The web accelerators will continue to work perfectly well as far as they're concerned - they'll just be blissfully unaware of all the items being deleted on the server as they follow the links.
> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Roger van de Kimmenade
> Sent: Tuesday, May 22, 2007 7:45 AM
> To: Jon Hanna
> Cc: Rest List
> Subject: Re: [rest-discuss] Why REST?
>
> Ok then, why convert a protocol, that is intuitive, to resources and CRUD operations.
> This can be quite artificial, just like UML diagrams can be completely correct but hard to understand.

In this case the intuitive protocol is /your/ protocol; it's not one anybody outside your group would understand. And it's not one that will work with all the other tools and infrastructure out there. Yet if you advertise the protocol in your messages as being "HTTP/1.1" then there will be some 'issues'.

This is what your protocol looks like to the rest of the world:

POST uri_01
GET uri_02

When a Google spider or read-ahead caching control encounters "uri_02" in a document, it will retrieve the content, because the protocol you advertise has defined that as being 'safe'.
Bill,

On 17.05.2007, at 01:17, Bill de hOra wrote:
>
> s = Server(args)
> doc = s.get_document(doc_url)
> doc = edit(doc)
> try:
>     s.save(doc)
> except ServerException, e:
>     log(e)
>
> I see this programming style a lot. I'm not sure what to make of it,

Hmm, can you explain your intention a bit further? I am having trouble seeing your issue.

Regarding the code, I think the wording is just wrong. What is named 'Server' there looks more like a user agent to me. As in

$ua = new LWP::UserAgent();
$res = $ua->get( $uri );

or any client-side library (e.g. an APP/Atom-specific one).

The coupling you talk about seems to me to be the coupling between the code that uses the library and the media type the library is implemented for.

Does that help in any way?

Jan

> but
> I suspect that letting developers pretend that servers are in the same
> address space as the client code results in problems; for example, the
> Server class can be supplied by the server owner leading to tight API
> coupling, despite documents travelling over HTTP. That said, I'm not
> sure what the sensible alternatives are, but I think the problems are
> independent of how you design your web application, ie, cool URLs
> won't necessarily help.
>
> cheers
> Bill
Hi,
I'm new to the world of REST web services. I've done a bit of study on
the subject, and I'm ready to start trying out some ideas. Before I
stumble forward, though, I want to see what the general consensus is on
a few issues that my reading hasn't really clarified.
The first one centers on the notion of encapsulation. For example, if I
am building an application from objects, most good advice urges you to
encapsulate the object's state where possible. For example, if I design
a class that can be either COMPLETE, or INCOMPLETE, a design that
exposes methods like:
object.MarkComplete and
object.MarkIncomplete
to effect a change to the CompleteState and
object.IsComplete and
object.IsIncomplete
to observe the state seems to be generally preferred over a design that
gives clients direct access to the object's state through "setter" and
"getter" methods like:
object.SetCompleteState(COMPLETE/INCOMPLETE) and
object.GetCompleteState
or maybe
object.IsComplete(bool) and
bool object.IsComplete
My question is what to do about exposing an object like this as a
resource. The basic approach that seems most common would seem to
prefer a "setter/getter" approach by passing a representation like this:
<object>
<completeState>INCOMPLETE</completeState>
</object>
into and out of the restful web service. This seems to me to fly in the
face of the notion of encapsulation.
Assuming encapsulation of this sort is an important feature of RESTful
design (Is it?), I am thinking of different ways to approach this to
preserve the encapsulation.
The first idea I had would change the representation so that its
contents are more "method-like" than "data-like":
<object>
<MarkComplete/>
</object>
or
<object>
<MarkIncomplete/>
</object>
for PUTs
and
<object>
<IsComplete/>
</object>
for GETs
So when the representation is passed into the web service through a PUT
or POST, the presence of the tag is essentially an indication to the
service to apply the "MarkComplete" method to the resource. This seems
a lot like protocol tunneling to me, so I assume it's poor design. Maybe
there's a distinction that is eluding me, so I thought I would include
it.
The next idea I had was to expose resources representing the collection
of "complete" items and the collection of "incomplete" items. In order
to change the state of a resource, I could POST the Uri of the resource
to the proper collection and code behind the scenes would take care of
updating the state appropriately.
This seems a lot more RESTful to me, but it seems to be problematic for
the client; the client will have to obtain the "complete" item list, for
example, and then search it in order to see if a given item's state is
COMPLETE.
So, next, I thought that perhaps I would embed the status into the
representation for testing set membership, but only allow modifications
using the list resources. So the client would see something like
before:
<object>
<IsComplete/>
</object>
or more probably:
<object>
<IsComplete/>
....
<MarkComplete href='..../lists/completeItems'/>
<MarkIncomplete href='..../lists/IncompleteItems'/>
</object>
or some similar variation.
Changing the <IsComplete/> to something else and then PUTting the new
representation back to the item's Url would be ignored. The state would
only change if the client POSTed the Url of the item to the appropriate
list.
This seems okay to me. It protects encapsulation in that the client has
no way to directly observe or change the resource's private state. It
also seems to me (remember, I am a novice) to follow REST principles.
The lack of symmetry for observing and changing the state seems a little
cumbersome to me, though.
So, which of these approaches is best from a design standpoint, and are
there others that are better?
Thanks in advance for any insight.
Hi Mike,
Thanks for the feedback. Can you tell me more about your thinking when
you say, "i have tended to use the first case. it seems more
resource-focused" What do you think makes the first option you mention
more "resource focused" than the other?
________________________________
i, too, am kinda new to this and have been working on similar issues.
it seems that one valid approach is to expose the following
(update the resource)
/objects/{id} PUT
<completeState>true</completeState>
200 ok
404 (not an object)
409 (object already marked complete)
another might be to expose a complete-state uri
(modify the list of complete objects)
/objects/complete/{id} POST
200 ok
404 (not an object)
409 (object already marked complete)
i have tended to use the first case. it seems more resource-focused.
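The first (resource-focused) case can be sketched as a handler returning exactly the status codes listed above. The in-memory store and function name are hypothetical, for illustration only:

```python
# Approach 1: PUT a new completeState representation to /objects/{id}.
# Returns 200 on success, 404 for an unknown object, and 409 if the
# object is already marked complete.

objects = {"42": {"complete": False}}

def put_complete_state(object_id, complete):
    obj = objects.get(object_id)
    if obj is None:
        return 404                 # not an object
    if complete and obj["complete"]:
        return 409                 # object already marked complete
    obj["complete"] = complete
    return 200

print(put_complete_state("42", True))   # 200
print(put_complete_state("42", True))   # 409
print(put_complete_state("99", True))   # 404
```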
Jan Algermissen wrote:
> Bill,
>
> On 17.05.2007, at 01:17, Bill de hOra wrote:
>
>> s = Server(args)
>> doc = s.get_document(doc_url)
>> doc = edit(doc)
>> try:
>>     s.save(doc)
>> except ServerException, e:
>>     log(e)
>>
>> I see this programming style a lot. I'm not sure what to make of it,
>
> Hmm, can you explain your intention a bit further? I am having trouble
> seeing your issue.

Why am I targeting a server instead of the resources?

> The coupling you talk about seems to me to be the coupling between the
> code that uses the library and the media type the library is implemented
> for.

I was thinking of API creep away from uniformity and towards implicit identity:

doc = s.get_document_for_user(docid, user)

Suddenly the client API is usable for only one domain/problem. It might even end up sharing interfaces or exceptions with a server API. I was also thinking about treating the server as though it were actually in the same address space/stack.

cheers
Bill
Encapsulation in OO-land is used to prevent transitions to invalid states.
In REST, state transitions are accomplished by traversing links
(hypertext as the engine of application state). To add the flavor of an
invalid state transition to your example let's say that once completed, a
resource can't be made incomplete again. Then I think something like your
last approach works best:
<object>
<IsComplete/>
....
</object>
for complete resources, and
<object>
<IsIncomplete/>
....
<MarkComplete href='..../lists/completeItems'/>
</object>
for incomplete resources.
But if you want to get by without hypertext in this example, the server
would reject the POST of a representation like this
<object>
<completeState>INCOMPLETE</completeState>
</object>
to an already complete resource, probably with a 409 CONFLICT (The request
could not be completed due to a conflict with the current state of the
resource).
In either approach, encapsulation is being provided by the server's
business logic. In the first approach, the server doesn't hand out URLs
that would allow invalid state transitions. In the second approach the
server rejects representations that would cause invalid state transitions.
But it's not really in the representation that that's accomplished, it's
in the server logic.
Regards,
Kevin Christen
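The point that encapsulation lives in the server's business logic, not in the representation, can be sketched in Python. This assumes the made-up one-way rule from the example (once complete, never incomplete again); the MarkComplete link is advertised only while the transition is legal, and the server independently rejects an illegal PUT with 409:

```python
# Hypertext as the engine of application state: the server hands out a
# MarkComplete link only for incomplete resources, and its business
# logic independently rejects invalid transitions with 409 CONFLICT.

resources = {"1": {"complete": False}}

def representation(rid):
    res = resources[rid]
    doc = {"state": "COMPLETE" if res["complete"] else "INCOMPLETE"}
    if not res["complete"]:
        # Only a legal transition gets a link in the representation.
        doc["MarkComplete"] = "/lists/completeItems"
    return doc

def put_state(rid, new_state):
    res = resources[rid]
    if res["complete"] and new_state == "INCOMPLETE":
        return 409                 # invalid transition: rejected
    res["complete"] = (new_state == "COMPLETE")
    return 200

print(representation("1"))          # link offered while INCOMPLETE
put_state("1", "COMPLETE")
print(representation("1"))          # {'state': 'COMPLETE'} - no link
print(put_state("1", "INCOMPLETE")) # 409
```

Either way the invariant is enforced server-side; the hypermedia version just lets clients discover which transitions are currently legal instead of finding out via a 409.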
rest-discuss@yahoogroups.com wrote on 05/22/2007 03:22:30 PM:
> Hi,
>
> I'm new to the world of REST web services. I've done a bit of study
> on the subject, and I'm ready to start trying out some ideas.
> Before I stumble forward, though, I want to see what the general
> consensus is on a few issues that my reading hasn't really clarified.
>
> The first one centers on the notion of encapsulation. For example,
> if I am building an application from objects, most good advice urges
> you to encapsulate the object's state where possible. For example,
> if I design a class that can be either COMPLETE, or INCOMPLETE, a
> design that exposes methods like:
>
> object.MarkComplete and
> object.MarkIncomplete
>
> to effect a change to the CompleteState and
>
> object.IsComplete and
> object.IsIncomplete
>
> to observe the state seems to be generally preferred over a design
> that gives clients direct access to the object's state through
> "setter" and "getter" methods like:
>
> object.SetCompleteState(COMPLETE/INCOMPLETE) and
> object.GetCompleteState
>
> or maybe
>
> object.IsComplete(bool) and
> bool object.IsComplete
>
> My question is what to do about exposing an object like this as a
> resource. The basic approach that seems most common would seem to
> prefer a "setter/getter" approach by passing a representation like this:
>
> <object>
> <completeState>INCOMPLETE</completeState>
> </object>
>
> into and out of the restful web service. This seems to me to fly in
> the face of the notion of encapsulation.
>
> Assuming encapsulation of this sort is an important feature of
> RESTful design (Is it?), I am thinking of different ways to approach
> this to preserve the encapsulation.
>
> The first idea I had would change the representation so that its
> contents are more "method-like" than "data-like":
>
> <object>
> <MarkComplete/>
> </object>
>
> or
>
> <object>
> <MarkIncomplete/>
> </object>
>
> for PUTs
>
> and
>
> <object>
> <IsComplete/>
> </object>
>
> for GETs
>
> So when the representation is passed into the web service through a
> PUT or POST, the presence of the tag is essentially an indication to
> the service to apply the "MarkComplete" method to the resource.
> This seems a lot like protocol tunneling to me, so I assume it's
> poor design. Maybe there's a distinction that is eluding me, so I
> thought I would include it.
>
> The next idea I had was to expose resources representing the
> collection of "complete" items and the collection of "incomplete"
> items. In order to change the state of a resource, I could POST the
> Uri of the resource to the proper collection and code behind the
> scenes would take care of updating the state appropriately.
>
> This seems a lot more RESTful to me, but it seems to be problematic
> for the client; the client will have to obtain the "complete" item
> list, for example, and then search it in order to see if a given
> item's state is COMPLETE.
>
> So, next, I thought that perhaps I would embed the status into the
> representation for testing set membership, but only allow
> modifications using the list resources. So the client would see
> something like before:
>
> <object>
> <IsComplete/>
> </object>
>
> or more probably:
>
> <object>
> <IsComplete/>
> ....
> <MarkComplete href='..../lists/completeItems'/>
> <MarkIncomplete href='..../lists/IncompleteItems'/>
> </object>
>
> or some similar variation.
>
> Changing the <IsComplete/> to something else and then PUTting the
> new representation back to the item's Url would be ignored. The
> state would only change if the client POSTed the Url of the item to
> the appropriate list.
>
> This seems okay to me. It protects encapsulation in that the client
> has no way to directly observe or change the resource's private
> state. It also seems to me (remember, I am a novice) to follow REST
> principles. The lack of symmetry for observing and changing the
> state seems a little cumbersome to me, though.
>
> So, which of these approaches is best from a design standpoint, and
> are there others that are better?
>
> Thanks in advance for any insight.
>
>
> Thanks Kevin,
Sounds like perhaps my OO training is creeping into my understanding of
resources - I seem to be trying to view a Resource as a special kind of
web-accessible object. I think your explanation adds some clarity that
I was missing before. Thank you.
Generally in OO practice, encapsulation is a way to control the state
transitions that an object makes, as you noted. It is also used to
decouple clients of an object's behavior from the object's
implementation of that behavior ("program to an interface," "send
messages, implement method," etc.) So, I think I need to mull over the
kinds of assumptions a client of a resource (correct terminology?
consumer of resource?) might make about the resource, and which ones are
reasonable, and which are not.
You mention below that, "In REST, state transitions are accomplished by
traversing links (hypertext as the engine of application state)." I've
certainly come across this notion in my studies, but sometimes it gets
fuzzy, design-wise, as to which link should be traversed in order to
change which kind of state.
For instance, if I had a resource with a "Name" element, and I wanted to
change the state of that resource so that the value of that name element
was something different, I'd traverse the Link/Url associated with the
resource itself and execute a PUT with a representation that contains
the new value for Name. The semantics would be that upon completion of
this traversal, the state of the resource would contain a new Name. This
seems like the only really reasonable path forward for a state change
like this.
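That GET-edit-PUT cycle might look like the following sketch (the element name Name and the XML shape come from the example above; ElementTree is just a stand-in for whatever the client actually uses):

```python
import xml.etree.ElementTree as ET

def rename(representation, new_name):
    # GET gave us `representation`; edit the Name element and return
    # the document we would PUT back to the resource's own URL.
    root = ET.fromstring(representation)
    root.find("Name").text = new_name
    return ET.tostring(root, encoding="unicode")
```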
On the other hand, given the example we've been talking about, in order
to change the state of the resource, I have to traverse a different sort
of link. This link isn't a link to the resource I'm interested in, it's
a link to a different resource - the "complete" collection.
So, I guess there's another design question involved here: what
conditions favor using the link of a given resource to change the state
of the resource and when would you use a different link to cause the
change. Assuming encapsulation is meaningful and important (in some
sense) for resources, that's one thing to consider. Certainly, the
overall utility of exposing something like the "complete" collection as
a resource would be a consideration. Are there other important factors
to consider?
________________________________
From: Kevin Christen [mailto:Kevin_Christen@...]
Sent: Tuesday, May 22, 2007 3:34 PM
To: Peters, Daniel R
Cc: Rest List
Subject: Re: [rest-discuss] REST and encapsulation
Encapsulation in OO-land is used to prevent transitions to invalid
states. In REST, state transitions are accomplished by traversing links
(hypertext as the engine of application state). To add the flavor of an
invalid state transition to your example let's say that once completed,
a resource can't be made incomplete again. Then I think something like
your last approach works best:
<object>
<IsComplete/>
....
</object>
for complete resources, and
<object>
<IsIncomplete/>
....
<MarkComplete href='..../lists/completeItems'/>
</object>
for incomplete resources.
But if you want to get by without hypertext in this example, the server
would reject the POST of a representation like this
<object>
<completeState>INCOMPLETE</completeState>
</object>
to an already complete resource, probably with a 409 CONFLICT (The
request could not be completed due to a conflict with the current state
of the resource).
In either approach, encapsulation is being provided by the server's
business logic. In the first approach, the server doesn't hand out URLs
that would allow invalid state transitions. In the second approach the
server rejects representations that would cause invalid state
transitions. But it's not really in the representation that that's
accomplished, it's in the server logic.
Regards,
Kevin Christen
My two cents worth... I don't think you should argue with your engineers
about this. They are probably right. Do the simple things that work until
they don't work. At that point, the better you understand REST, the
better position you are in to make a compelling proposal. But you're
probably never going to get to that point. REST is not needed for all
applications.

Now get some rest,
Walden

----- Original Message -----
From: rogervdkimmenade
To: rest-discuss@yahoogroups.com
Sent: Tuesday, May 22, 2007 8:45 AM
Subject: [rest-discuss] Why REST?

Hello,

My software engineering feeling says to go to REST. In my work we already
use HTTP as the interface for system integration. In fact it already
looks quite a bit like it. My proposal is to switch to REST entirely, but
argumentation is (understandably) needed.

Examples we use:

POST ../test1?cmd=run
GET ../test2?cmd=delete

My arguments:

* It is more "standard"
* Tooling is available: the Restlet framework, Frevo (maybe not much
today, but it certainly has a lot of potential; see Microsoft
initiatives)
* SOAP starts supporting REST (they do not do this just like that)

Counter-arguments from engineers:

* We do this already, so why change?
* REST is just a hype, so why follow?

I am looking for more pro-arguments, so you RESTarians can help me ?!

Regards,
Roger van de Kimmenade
I'd like to investigate a couple of REST aspects. Assume we have a
Resource R. R can transition through a number of states. Let's say the
states are S1, S2, S3, and S4. Outside clients can only directly initiate
transitions from S1 to S4 and from S4 to S1. As a result of initiating a
transition from S1 to S4, R will pass through S2 on the way to S4. As a
result of initiating a transition from S4 to S1, R will pass through S3
on the way to S1. A client can sample R at any time and see the state of
R in any of the states S1, S2, S3, and S4.

Without being RPCish, I'm having a difficult time imagining how to
structure URIs that allow a client to transition R from S1 to S4 and S4
to S1. For example I could imagine a GET to R returning something like:

<R>
<State>S1</State>
</R>

I can imagine a client performing a PUT to R/State, changing its value to
S4. This doesn't feel right because a subsequent GET might result in the
client seeing <State>S2</State>. This might be confusing as the PUT
returned good status, yet the client doesn't see S4 in the State element.

This also doesn't feel right from the POV of "Hypermedia being the engine
of application state". There's nothing in the representation that
_guides_ the client to a URI for initiating a _valid_ state change. For
example, there's nothing that says to the client that the only valid
thing you can do to R at this point is initiate a transition to S4.

What kind of makes sense is a GET to R returning something like:

<R>
<State>S1</State>
<TransitionUri>R/S4</TransitionUri>
</R>

A client can then perform an "empty" PUT to R/S4. A GET of R might then
look like:

<R>
<State>S2</State>
</R>

Here, there is no TransitionUri element because no external transition
capability exists while R is in S2. Upon reaching S4, a GET of R might
look like:

<R>
<State>S4</State>
<TransitionUri>R/S1</TransitionUri>
</R>

Is this on-track, or is there yet a "better" representation and URI
structure? Thanks.
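The lifecycle described above can be sketched as a small state machine (all names here are made up; the intermediate states S2/S3 advance on the server's own schedule, modelled by an explicit tick(), which has nothing to do with HTTP itself):

```python
# Sketch of the S1..S4 lifecycle: clients may only initiate S1->S4 and
# S4->S1; the resource passes through S2/S3 on its own.

VALID = {("S1", "S4"): "S2", ("S4", "S1"): "S3"}  # initiation -> intermediate

class R:
    def __init__(self):
        self.state = "S1"
        self._target = None

    def initiate(self, target):
        intermediate = VALID.get((self.state, target))
        if intermediate is None:
            raise ValueError("clients may only initiate S1->S4 or S4->S1")
        self.state, self._target = intermediate, target

    def tick(self):
        # server-side progress: leave the intermediate state
        if self._target is not None:
            self.state, self._target = self._target, None
```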
> For example I could imagine a GET to R returning something like:
> <R>
> <State>S1</State>
> </R>
>
> I can imagine a client performing a PUT to R/State, changing its value
> to S4.

I think that you would perform a PUT to "R", not some other resource
"R/State". I've seen people talking about portions of a document (the
<State> element) as if it were the resource, or was automatically
addressable. If it were addressable, the document should indicate that
(which you talk about as well).

> This doesn't feel right because a subsequent GET might result in the
> client seeing <State>S2</State>. This might be confusing as the PUT
> returned good status, yet the client doesn't see S4 in the State
> element.

> What kind of makes sense is a Get to R returning something like:
> <R>
> <State>S1</State>
> <TransitionUri>R/S4</TransitionUri>
> </R>

I don't think putting the state (or state identifier) of a resource into
a resource identifier is the right thing to do. Other requests may have
been sent, and so the resource may be in any of the four valid states.
Consider another client C2 sending a request to transition to S1: when
the first client later retrieves a representation, it could be in S1,
which is not the state S4 that it submitted.

If making intermediate states visible has no purpose, then don't expose
them. If it does have a purpose, then it's okay that the state of a
retrieved representation isn't what was submitted.

________________________________
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com]
On Behalf Of Jim Sievert
Sent: Tuesday, May 22, 2007 7:46 PM
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] On "Hypermedia is the engine of application
state" and replacing RPC...

I'd like to investigate a couple of REST aspects. Assume we have a
Resource R. R can transition through a number of states. Let's say the
states are S1, S2, S3, and S4. Outside clients can only directly initiate
transitions from S1 to S4 and from S4 to S1.
On 5/22/07, Jim Sievert <james.sievert@...> wrote:
> I can imagine a client performing a PUT to R/State, changing its value
> to S4. This doesn't feel right because a subsequent GET might result
> in the client seeing <State>S2</State>. This might be confusing as the
> PUT returned good status, yet the client doesn't see S4 in the State
> element.

I don't understand why the server is stuck in S2 in your example. But if
you are saying that the legal transition from S1 was to S2, then you left
something out of your hypertext in the "GET from R":

<form action="transition-to-s2" method="put">
<input type="hidden" name="new-state" value="2">
<input type="submit">
</form>

(with liberties from html5). Now clients can only do that one legal state
transition.

Forms are an essential part of the web, but not necessarily of the REST
style. There is probably a way you could accomplish the same thing by
minting MIME types, using the ACCEPT header, etc.

Hugh
On 5/22/07, Mike Dierken <dierken@...> wrote:
> > I can imagine a client performing a PUT to R/State, changing its
> > value to S4.
> I think that you would perform a PUT to "R", not some other resource
> "R/State".

Yeah, I missed that part. I replied thinking you were PUTting a new state
to R, as you ought.
mike:
On 5/22/07, Mike Dierken <dierken@...> wrote:
<snip>
> I've seen people talking about portions of a document (the <State> element)
> as if it were the resource, or was automatically addressable. If it were
> addressable, the document should indicate that (which you talk about as
> well).
</snip>
i was one who posted a sample that contained a 'partial' resource and
am curious about following up on this.
i've implemented a pattern that defines an xml resource document that
contains several elements.
the schema for POSTing that resource to the server (address =
/objects/) has, say, three required elements (name,size, amount).
a GET to the same address returns a list of links to available objects
(/objects/{id}) that has a slightly different schema since the server
can return some additional data in the requested resource.
a PUT to the object address (/objects/{id}) can have one or more of
the same three elements from the POST schema. in my case, any element
that is not included in the PUT is left untouched in the store on the
server.
there's nothing in this description that breaks the REST model, correct?
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
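The PUT-merge step of mca's pattern might look like the following sketch (the field names come from his example; the dict is a stand-in for whatever store the server actually uses):

```python
ALLOWED = ("name", "size", "amount")  # the three elements from the POST schema

def partial_update(stored, incoming):
    # Any element that is not included in the PUT is left untouched
    # in the store on the server.
    for key in ALLOWED:
        if key in incoming:
            stored[key] = incoming[key]
    return stored
```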
"hovhannes_tumanyan" <hovhannes_tumanyan@...> writes:

> Yohanes,
> However, I respectfully disagree with the statement that interactions
> in my example modify the server state - server responses will be
> consistently the same no matter who and no matter how many times
> requests in my example are submitted. Do you agree?
> User selects the "add to cart" link
> Buy submits /cart?action=add&id=XXXX
> Server response includes id=XXXX (it can be XML or checkout URI with
> parameters - /cart?action=checkout?id=XXXX)
> User selects another item
> Request goes to server as /cart?action=add&id=XXXX&id=YYYY
> Server responds with /cart?action=checkout?id=XXXX&id=YYYY
> User selects "checkout" - client side code submits the "checkout"
> request as instructed by server.

In the interaction above, you are passing back and forth the item list
(the cart's state). You were saying that the interaction technique above
avoids the need for the server to change the state of one of the
resources it controls. It could, and that is up to you.

It is good to have some sort of a reservation system. You want to be able
to overbook an item to a certain degree (since not all users putting the
item into the cart will check it out) and inform the user that an item is
out of stock once the threshold is crossed. For fairness, you'd also
probably want to implement a first-come-first-served policy for item
reservation. It is definitely bad for business for a customer who takes a
first swipe at a very popular and limited item, then goes on shopping
some more, to discover that putting items into the cart does not actually
reserve them. Such a reservation system changes some server-side state.

So, yes, you can be correct in maintaining that the above interaction
does not change server-side state, but if I were you, I wouldn't want to
be correct on that unless there is really no need to.

I hope this helps,
YS.
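A hedged sketch of the reservation idea above (everything here is illustrative; the overbook factor of 1.2 is arbitrary): reservations are granted first-come-first-served up to a threshold above actual stock, after which the item reports out-of-stock.

```python
# Sketch: overbookable, first-come-first-served reservations.

class Item:
    def __init__(self, stock, overbook_factor=1.2):
        self.limit = int(stock * overbook_factor)
        self.reserved = 0

    def reserve(self):
        # A put-in-cart request hits this; the threshold allows some
        # overbooking since not every cart checks out.
        if self.reserved >= self.limit:
            return False  # tell the user the item is out of stock
        self.reserved += 1
        return True
```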
rogervdkimmenade wrote:
> I am looking for more pro-arguments, so you RESTarians can help me ?!

If it works for you, and all the resources in your domain are
addressable, then fine. You can move towards a more complete
implementation of the style over time (just like you would when moving to
objects). Remember that introducing excessive new technology is a risk in
itself.

The exception I see in your examples is using GET for destructive
actions. That's a serious design flaw - when it happens and a user
unintentionally destroys data, you won't be able to rationalize why you
shipped with it - the Rails community have tried to do that twice and
failed miserably. You will be deemed incompetent if it comes to loss of
funds, or to placing people in harm's way.

cheers
Bill
Does this thread contain some confusion between the different kinds of
state in a Web app?
That is,
* resource state: state of a resource, known only by its
representations. If you PUT, you change the resource state.
* application state: ("hypermedia is the engine of...") described by
hyperlinks in representations of resources from the server, navigated
by the client.
* session state: in REST, known only to the client.
I think this thread so far has been about resource state, not application state.
Might be more kinds, and better explanations. I went looking for a
canonical explanation of the kinds and differences.
Anybody got a good reference?
> Does this thread contain some confusion between the different kinds of
> state in a Web app?

This is what I'm attempting to tug at. In the example, I'm intentionally
trying to draw a close relationship between the resource state and the
application state. From the original example, you could imagine that
based on the legal state transitions on the resource (resource state), a
client may want to gray out (or not display) buttons that cause such
transitions (application state). The original example shows a close link
between the two.

To make the original example more concrete, let's say R is an Engine. S1
is "Stopped", S2 is "Starting", S3 is "Stopping", and S4 is "Running".
Assume the Engine is "Stopped". A client enables a Start button and a
Stop button based on Engine state. A push of the Start button causes the
client to issue an Engine state transition to "Running" from "Stopped".
On the way to running, the Engine is "Starting". During the transition
from "Starting" to "Running", we don't want the client to enable the
Start or Stop buttons. While the Engine is "Running" we only want the
client to enable the Stop button.

From what I understand about "hypermedia is the engine of...", the server
wants to give the client hints about where the next legal transitions
are. So how exactly would one structure the Engine representation and
URIs to best suit such ends?
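One way to sketch the server side of the Engine example (URIs and element names here are invented for illustration): each state determines which transition links the representation advertises, so a client enables only the buttons it sees links for.

```python
# Sketch: state -> advertised transitions, so hypermedia drives the UI.

TRANSITIONS = {
    "Stopped":  [("start", "/engine/running")],
    "Running":  [("stop", "/engine/stopped")],
    "Starting": [],   # intermediate: no buttons enabled
    "Stopping": [],
}

def engine_representation(state):
    lines = ["<Engine>", "  <State>%s</State>" % state]
    for rel, uri in TRANSITIONS[state]:
        lines.append("  <Transition rel='%s' href='%s'/>" % (rel, uri))
    lines.append("</Engine>")
    return "\n".join(lines)
```

A client that renders a button per Transition element never has to hard-code which transitions are legal in which state.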
"Jim Sievert" <james.sievert@...> writes:
> From what I understand about "hypermedia is the engine of...", the server
> wants to give the client hints about where the next legal transitions are.
> So how exactly would one structure the Engine representation and URIs to
> best suite such ends?
But hypermedia is not necessary for micro state modelling; consider:
/someresource
<html>
<body>
<select name="state">
<option value="S1" selected="yes"/>
<option value="S4"/>
</select>
</body>
</html>
This can communicate perfectly well what the other legal states are.
As was said before, there is no reason to expose the intermediate
states if they are not necessary in some way.
--
Nic Ferrier
http://www.tapsellferrier.co.uk
> From: Mike Dierken [mailto:dierken@...]
> > What kind of makes sense is a Get to R returning something like:
> > <R>
> > <State>S1</State>
> > <TransitionUri>R/S4</TransitionUri>
> > </R>
>
> I don't think putting the state (or state identifier) of a resource
> into a resource identifier is the right thing to do.

Perhaps I'm misunderstanding what you're saying or what's being said here
[1], but aren't your words in conflict with that message?

"The essence of REST is to make the states of the protocol explicit and
addressible by URIs. The current state of the protocol state machine is
represented by the URI you just operated on and the state representation
you retrieved. You change state by operating on the URI of the state
you're moving to, making that your new state. A state's representation
includes the links (arcs in the graph) to the other states that you can
move to from the current state."

My example was based on what I interpreted [1] to say.

[1] http://pluralsight.com/blogs/tewald/archive/2007/04/26/46984.aspx
> /someresource
>
> <html>
> <body>
> <select name="state">
> <option value="S1" selected="yes"/>
> <option value="S4"/>
> </select>
> </body>
> </html>
>
> This can communicate perfectly well what the other legal states are.

Perhaps to a browser-based UI. What about non-browser-based clients or
machine-to-machine communications where a medium such as XML would be
more suitable?
"Jim Sievert" <james.sievert@...> writes:

> Perhaps to a browser-based UI. What about non-browser-based clients or
> machine-to-machine communications where a media such as XML would be
> more suitable?

That looks like XML to me. What is there that a machine can't read?

--
Nic Ferrier
http://www.tapsellferrier.co.uk
"Jim Sievert" <james.sievert@...> writes:

> Perhaps I'm misunderstanding what you're saying or what's being said
> here [1], but aren't your words in conflict with that message?
>
> "The essence of REST is to make the states of the protocol explicit
> and addressible by URIs. The current state of the protocol state
> machine is represented by the URI you just operated on and the state
> representation you retrieved. You change state by operating on the URI
> of the state you're moving to, making that your new state. A state's
> representation includes the links (arcs in the graph) to the other
> states that you can move to from the current state."

I am finding your example a little too abstract to be helpful. However, I
would say that the trouble is your state is too simple to be modelled by
multiple resources and therefore hypermedia. It *looks* to me like a
single resource that is just changed to reflect the state it's in at any
one time.

In other words, if your application has only one state variable then
there isn't going to be much hypermedia.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
On 5/23/07, Nic James Ferrier <nferrier@...> wrote:
> That looks like XML to me.
>
> What is there that a machine can't read?

Just to restate what Nic is saying, and what I said above: your
hypermedia has to indicate to the client the allowed transitions. If the
hypermedia is HTML then that's often a form. But your custom XML can have
link elements too. Those and only those would be the allowed transitions.
"Hugh Winkler" <hughw@...> writes:

> Just to restate what Nic is saying, and what I said above: Your
> hypermedia has to indicate to the client the allowed transitions. If
> the hypermedia is HTML then that's often a form. But your custom xml
> can have link elements too. Those and only those would be the allowed
> transitions.

Just to restate what Hugh is saying (heh!): the hypermedia can be so
simple that there is only one state variable, as in this example. Well
then the resource representation can use markup to represent the
available state transitions. I mean, you could mark this up in fifty
gazillion different ways to say the same thing; the HTML is just a
convenient lingua franca.

Note that there isn't a FORM here... it's just a simple resource
representation to say "this resource can contain one of these states".
The client would explicitly have to understand that's what this
representation means. That could be made better, as Hugh says, by using a
FORM or through using links. But often it's not necessary because your
client has to be coded to understand explicitly anyway.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
Sounds good.
Returning a list of links is a good way to use hypertext for resource
discovery.
I still have grief about PUT being used for 'partial update'. POST would
work as well and the definition of POST more closely describes the
cache-ability of a partial update.
> -----Original Message-----
> From: rest-discuss@yahoogroups.com
> [mailto:rest-discuss@yahoogroups.com] On Behalf Of mike amundsen
> Sent: Tuesday, May 22, 2007 8:52 PM
> To: rest-discuss
> Subject: Re: [rest-discuss] On "Hypermedia is the engine of
> application state" and replacing RPC...
>
> mike:
>
> On 5/22/07, Mike Dierken <dierken@...> wrote:
> <snip>
> > I've seen people talking about portions of a document (the <State>
> > element) as if it were the resource, or was automatically
> addressable.
> > If it were addressable, the document should indicate that
> (which you
> > talk about as well).
> </snip>
>
> i was one who posted a sample that contained a 'partial'
> resource and am curious about following up on this.
>
> i've implemented a pattern that defines an xml resource
> document that contains several elements.
>
> the schema for POSTing that resource to the server (address =
> /objects/) has, say, three required elements (name,size, amount).
>
> a GET to the same address returns a list of links to available objects
> (/objects/{id}) that has a slightly different schema since
> the server can return some additional data in the requested resource.
>
> a PUT to the object address (/objects/{id}) can have one or
> more of the same three elements from the POST schema. in my
> case, any element that is not included in the PUT is left
> untouched in the store on the server.
>
> there's nothing in this description that breaks the REST
> model, correct?
>
>
>
> --
> mca
> "In a time of universal deceit, telling the truth becomes a
> revolutionary act. " (George Orwell)
>
>
>
> Yahoo! Groups Links
>
>
>
I think in your example, the move from S1 to S4 or S4 to S1 is not a
simple state change, but a process. So I would not model it as a client
PUTting S4 to the server, but instead as starting a process to move to
S4. Then a client POSTs 'MoveToS4' to /R/Processes.
ittay
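A rough sketch of ittay's suggestion (all names and the URI scheme here are made up for illustration): POSTing 'MoveToS4' to /R/Processes creates a process resource, whose URI the client can then GET to watch progress.

```python
# Sketch: a process collection that mints a new process resource per POST.

import itertools

class ProcessCollection:
    def __init__(self):
        self._ids = itertools.count(1)
        self.processes = {}

    def post(self, command):
        pid = next(self._ids)
        uri = "/R/Processes/%d" % pid
        self.processes[uri] = {"command": command, "status": "running"}
        return uri  # would go in the Location header of a 201 Created
```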
Jim Sievert wrote on
05/23/07 05:46:
I’d like to investigate a couple REST aspects. Assume we have a Resource R. R can transition through a number of states. Let’s say the states are S1, S2, S3, and S4. Outside clients can only directly initiate transitions from S1 to S4 and from S4 to S1. As a result of initiating a transition from S1 to S4, R will pass through S2 on the way to S4. As a result of initiating a transition from S4 to S1, R will pass through S3 on the way to S1. A client can sample R at any time and see the state of R in any of the states S1, S2, S3, and S4.
Without being RPCish, I’m having a difficult time imagining how to structure URIs that allow a client to transition R from S1 to S4 and S4 to S1. For example, I could imagine a GET to R returning something like:
<State>S1</State>
I can imagine a client performing a PUT to R/State, changing its value to S4. This doesn’t feel right because a subsequent GET might result in the client seeing
<State>S2</State>. This might be confusing, as the PUT returned good status, yet the client doesn’t see S4 in the State element. This also doesn’t feel right from the POV of “hypermedia being the engine of application state”. There’s nothing in the representation that _guides_ the client to a URI for initiating a _valid_ state change. For example, there’s nothing that says to the client that the only valid thing you can do to R at this point is initiate a transition to S4.
What kind of makes sense is a GET to R returning something like:
<State>S1</State> <TransitionUri>R/S4</TransitionUri>
A client can then perform an “empty” PUT to R/S4. A GET of R might then look like:
<State>S2</State>
Here, there is no TransitionUri element because no external transition capability exists while R is in S2. Upon reaching S4, a GET of R might look like:
<State>S4</State> <TransitionUri>R/S1</TransitionUri>
Is this on-track, or is there yet a “better” representation and URI structure?
Thanks.
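Jim's constrained state machine above is easy to model: only S1 and S4 offer an outside-initiated transition, so only those representations carry a TransitionUri. A minimal illustrative sketch (the dict and function names are mine, not from the thread; the markup shape follows the examples above):

```python
# Illustrative model of resource R: four states, but clients may only
# initiate S1->S4 and S4->S1; S2 and S3 are transitional states the
# client can merely observe. The representation carries a TransitionUri
# only when an external transition is legal.
EXTERNAL_TRANSITIONS = {"S1": "R/S4", "S4": "R/S1"}

def representation(state):
    """Build the GET-of-R body for the given state."""
    body = "<State>%s</State>" % state
    target = EXTERNAL_TRANSITIONS.get(state)
    if target is not None:
        body += "<TransitionUri>%s</TransitionUri>" % target
    return body

print(representation("S1"))  # <State>S1</State><TransitionUri>R/S4</TransitionUri>
print(representation("S2"))  # <State>S2</State>  (no transition offered)
```

The hypermedia itself tells the client what is valid: a representation without a TransitionUri offers nothing to do but poll again.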
Well, I for one am completely confused. I think the term 'state' is being used for two things in this thread - "state of a resource" and "application state".
I've always thought of hyperlinks within hypermedia (whether explicit or generated via a form language) to indicate the available resources the /client/ can retrieve, which result in the client /transitioning to a different state/. The hyperlinks - combined with a 'forms language' that describes both the operations and acceptable content - also describe how to modify/create a resource, which results in the set of resources /transitioning to a different state/. It's the combination of the client state and the set of resources which, to me, defines the 'application state' of which we speak.
But I'd really like to understand better how the real world uses hypermedia as an engine.
> -----Original Message-----
> From: rest-discuss@yahoogroups.com
> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Hugh Winkler
> Sent: Wednesday, May 23, 2007 1:18 PM
> To: Nic James Ferrier
> Cc: james.sievert@...; rest-discuss
> Subject: Re: [rest-discuss] On "Hypermedia is the engine of application state" and replacing RPC...
>
> On 5/23/07, Nic James Ferrier <nferrier@...> wrote:
> > "Jim Sievert" <james.sievert@...> writes:
> >
> > >> /someresource
> > >>
> > >> <html>
> > >> <body>
> > >> <select name="state">
> > >> <option value="S1" selected="yes"/>
> > >> <option value="S4"/>
> > >> </select>
> > >> </body>
> > >> </html>
> > >>
> > >> This can communicate perfectly well what the other legal states are.
> > >
> > > Perhaps to a browser-based UI. What about non-browser-based clients
> > > or machine-to-machine communications where a media such as XML would
> > > be more suitable?
> >
> > That looks like XML to me.
> >
> > What is there that a machine can't read?
>
> Just to restate what Nic is saying, and what I said above:
> Your hypermedia has to indicate to the client the allowed transitions.
> If the hypermedia is HTML then that's often a form. But your custom XML can have link elements too. Those and only those would be the allowed transitions.
"Mike Dierken" <dierken@...> writes:
> But I'd really like to understand better how the real world uses hypermedia
> as an engine.
It's a sexy phrase. I've always understood it to mean that an
application can have state (what is it doing right now? what has it
just done?) but that you can use links and other href targets (forms
for example, xlinks, includes, etc...) to represent the state
transitions; eg:
from the invoices page you can go to:
customer-add page
or invoice-add page
But this is when you have more than just one variable.
In the example being talked about earlier there was only one variable
that could be in 4 states.
Now, I'm sure that you can do that with hypermedia if you want to but
I'm a simple bloke and I see that as just one resource. Even if it has
multiple states (ie: values) you can still describe it with a single
representation because you can describe all the possible state
transitions (or values) in the single representation.
I think what I'm saying is what Ockham said: don't needlessly
overcomplicate things (actually, it turns out he didn't say that).
--
Nic Ferrier
http://www.tapsellferrier.co.uk
> I think what I'm saying is what Ockham said: don't needlessly
> overcomplicate things

This too is my goal in originating this thread.

My background is in device control. Standards like WS-Management are making headway into small devices. Equal capabilities can come at a lower cost (memory, processor power, etc.) by applying the principles of REST to these small devices. The only problem is that I'm not quite sure how the principles apply completely.

Device control consists of simple state transitions in terms of the device itself (e.g. turn on/turn off). With REST, I'm limited in terms of the verbs used to control that device (GET, PUT, POST, etc.). That's fine. I can induce state changes through PUTs, although it's a bit strange to change the device state to "On" when there are a bunch of intervening states/progress indicators that a client wants to see on the way to "On". This can be handled too using separate states, control vs. progress. I've considered this to be resource state management.

I'm looking at where application state comes into play. I consider application state to be the client interactions with the device, such that the device gives the client hints about what transitions are valid against the device while the device is in a given state.

Does this provide more context?
"Jim Sievert" <james.sievert@...> writes:

> I'm looking at where application state comes into play. I consider
> application state to be the client interactions with the device such that
> the device gives the client hints about what transitions are valid against
> the device while the device is in a given state.
>
> Does this provide more context?

Yes! It does. But I'm still having trouble thinking of specifics.

Maybe you could give us a specific use case?

I mean, the PUT to "on" sounds good... but you don't say what the intervening states are and why they need to be exposed. Are you going from "off" to "nearly ready" to "on", for example? Is this a time frame/usability/feedback issue? etc...

--
Nic Ferrier
http://www.tapsellferrier.co.uk
> Maybe you could give us a specific use case?
>
> I mean, the PUT to "on" sounds good... but you don't say what the
> intervening states are and why they need to be exposed. Are you going
> from "off" to "nearly ready" to "on" for example? Is this a time
> frame/usability/feedback issue? etc...

From a client POV you turn a device "on" or "off". For a client-side transition from "off" to "on", let's say a device can go from "off" to "loading microcode" to "initializing device" to "running confidence test" to "on". Let's also say that the intermediate states can take many seconds each to execute. Showing these states acts as a progress indicator to the client.
pmfji, but this article might be helpful as an example that looks close to your use cases:

http://www.xml.com/pub/a/2005/04/06/restful.html

I was struck by the use of 303 to deal with the various server-side states.

On 5/24/07, Nic James Ferrier <nferrier@...> wrote:
> "Jim Sievert" <james.sievert@...> writes:
>
> > I'm looking at where application state comes into play. I consider
> > application state to be the client interactions with the device such that
> > the device gives the client hints about what transitions are valid against
> > the device while the device is in a given state.
> >
> > Does this provide more context?
>
> Yes! It does.
>
> But I'm still having trouble thinking of specifics.
>
> Maybe you could give us a specific use case?
>
> I mean, the PUT to "on" sounds good... but you don't say what the
> intervening states are and why they need to be exposed. Are you going
> from "off" to "nearly ready" to "on" for example? Is this a time
> frame/usability/feedback issue? etc...
>
> --
> Nic Ferrier
> http://www.tapsellferrier.co.uk

--
mca
"In a time of universal deceit, telling the truth becomes a revolutionary act." (George Orwell)
"Jim Sievert" <james.sievert@...> writes:
>> Maybe you could give us a specific use case?
>>
>> I mean, the PUT to "on" sounds good... but you don't say what the
>> intervening states are and why they need to be exposed. Are you going
>> from "off" to "nearly ready" to "on" for example? Is this a time
>> frame/usability/feedback issue? etc...
>
> From a client POV you turn a device "on" or "off". From a client-side
> transition from "off" to "on", let's say a device can go from "off" to
> "loading microcode" to "initializing device" to "running confidence test" to
> "on". Let's also say that the intermediate states can take many seconds
> each to execute. Showing these states acts as a progress indicator to the
> client.
So that still sounds like just one state to me which you'd PUT to.
So I can imagine a situation like this:
GET /resource
=> 200
<doc>
<select name="status">
<option value="off" selected="true"/>
<option value="on"/>
</select>
</doc>
PUT /resource?status=on
=> 200
<doc>
<select name="status">
<option value="loading-microcode" selected="true"/>
<option value="off"/>
</select>
</doc>
10ms later
GET /resource
=> 200
<doc>
<select name="status">
<option value="init-device" selected="true"/>
<option value="off"/>
</select>
</doc>
20ms later
GET /resource
=> 200
<doc>
<select name="status">
<option value="on" selected="true"/>
<option value="off"/>
</select>
<a href="microcode"/>
<a href="blinking-lights"/>
</doc>
The final "on" resource shows some state transitions that would
presumably allow further GETs or PUTs to do more things.
--
Nic Ferrier
http://www.tapsellferrier.co.uk
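The GET/PUT exchange Nic sketches above boils down to "request the transition, then poll the representation until the target state appears". Here is a minimal sketch of that polling client, with an in-memory Device class standing in for the HTTP resource; the class, its state names, and wait_for_state are all illustrative, not from the thread:

```python
# Stand-in for /resource above: read_state() plays the role of GET,
# put_status() the role of PUT. A PUT of "on" merely *starts* the async
# boot sequence, which the client then observes by polling.
class Device:
    BOOT_SEQUENCE = ["loading-microcode", "init-device", "on"]

    def __init__(self):
        self.status = "off"
        self._boot = None

    def put_status(self, target):
        # Accept the request immediately; the transition itself is async.
        if target == "on" and self.status == "off":
            self._boot = iter(self.BOOT_SEQUENCE)
        elif target == "off":
            self.status, self._boot = "off", None

    def read_state(self):
        # Each poll advances the simulated boot by one step.
        if self._boot is not None:
            self.status = next(self._boot, self.status)
            if self.status == "on":
                self._boot = None
        return self.status

def wait_for_state(device, target, max_polls=10):
    """Poll until the device reports `target`; collect every state seen."""
    seen = []
    for _ in range(max_polls):
        status = device.read_state()
        seen.append(status)
        if status == target:
            return seen
    raise TimeoutError("never reached %r, saw %r" % (target, seen))

d = Device()
d.put_status("on")
print(wait_for_state(d, "on"))  # ['loading-microcode', 'init-device', 'on']
```

The intermediate states the client collects are exactly the progress indicator Jim asked for; nothing beyond GET and PUT is needed.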
> So I can imagine a situation like this:
>
> GET /resource
> => 200
> <doc>
> <select name="status">
> <option value="off" selected="true"/>
> <option value="on"/>
> </select>
> </doc>
>
> PUT /resource?status=on
> => 200
> <doc>
> <select name="status">
> <option value="loading-microcode" selected="true"/>
> <option value="off"/>
> </select>
> </doc>

Ah, the light just went on (no pun intended). I now get what you and Hugh are talking about. I was thrown by the HTML representation, but viewing it in light of the added context, it makes a lot of sense. I get resource state and application state bundled nicely into one concise chunk. Not only that, but by formulating the representation in HTML, I get operational semantics to boot. Potentially adding HTML forms might even bring more to the party...

Thanks.
[ Attachment content not displayed ]
"Steve Loughran" <steve.loughran.soapbuilders@...> writes:

> On 5/24/07, Jim Sievert <james.sievert@...> wrote:
>
>> > PUT /resource?status=on
>> > => 200
>> > <doc>
>> > <select name="status">
>> > <option value="loading-microcode" selected="true"/>
>> > <option value="off"/>
>> > </select>
>> > </doc>
>
> Starting to look suspiciously like a WS-DM resource state message
> there; just need an XML schema to describe it and you'd be done. More
> dangerously, it assumes that the resource can immediately start a
> synchronous state transition, which may or may not be the case.

"more dangerously"... well, if it couldn't, you'd clearly have to code for that, wouldn't you?

All I'm saying is that this sort of thing is possible with RESTful systems and not even that complicated. The client asks the resource to set its status to "on"... presumably the resource can receive that message and can start to reflect the fact, at least, that the transition has been requested.

> In long-haul comms, when you put something into a state it begins an
> async operation to enter that state, then either enters it or fails.
> You can queue up state change requests if that is what you want.

That's all we have here I think, unless I am missing something. The state reflected in the select could be "switching to on" for as long as it takes to confirm the next state (or whatever).

--
Nic Ferrier
http://www.tapsellferrier.co.uk
Hopefully this isn't a dumb question:

Assuming that I can differentiate between the two and send the appropriate response to the client based on the accepted encoding(s) - is it OK to generate the same ETag for both deflated and gzipped content? Is it an absolute no-no? Or is it OK if I know and can avoid certain pitfalls?

Thanks,
Keyur
I'm on the verge of ordering this book. Any considered opinions? Cheers, Mark Humphries Manila, Philippines
I reviewed much of it months ago; it's pretty good (though the acknowledgements didn't mention I reviewed it). Very practical advice. Some of the "REST vs. SOA" stuff misses the mark, but it's not nearly as bad as most other pieces on the subject.

Disclaimer: I was given a free copy.

Mark.

On 5/24/07, Mark W. Humphries <mwh@...> wrote:
> I'm on the verge of ordering this book. Any considered opinions?
>
> Cheers,
> Mark Humphries
> Manila, Philippines
I thought it was a very good, practical introduction to the subject. A lot more comprehensive than most of the stuff that I've been able to locate and study online. I'd buy it again. ________________________________ From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Mark W. Humphries Sent: Thursday, May 24, 2007 3:18 PM To: rest-discuss@yahoogroups.com Subject: [rest-discuss] O'Reilly RESTful Web Services Book I'm on the verge of ordering this book. Any considered opinions? Cheers, Mark Humphries Manila, Philippines
On 5/24/07, Mark W. Humphries <mwh@...> wrote: > > I'm on the verge of ordering this book. Any considered opinions? I am still waiting for O'Reilly to pull their finger out and deliver my order, so I can't comment directly but here is Jon Udell's commentary: http://blog.jonudell.net/2007/05/24/restful-web-services/ Regards, Alan Dean http://thoughtpad.net/alan-dean
On 5/24/07, Keyur Shah <keyurva@...> wrote: > > Hopefully this isn't a dumb question: > > Assuming that I can differentate between the two and send the > appropriate response to the client based on the accepted encoding(s) - > Is it ok to generate the same ETag for both deflated and gzipped content? The ETag should be based on the underlying resource state, unrelated to the specific representation MIME type or encoding. So the same resource rendered out to XML or JSON should have the same ETag, ditto if it is GZip or Deflate encoded (or unencoded). Regards, Alan Dean http://thoughtpad.net/alan-dean
Thanks, Alan. One more question - I am noticing that if I GZIP my response and set an ETag, IE 6 does not send an If-None-Match in subsequent requests to the same resource... It however does send the If-None-Match for deflated responses... Firefox of course sends If-None-Match as expected in both cases... Is this a known IE bug or am I doing something wrong? --Keyur http://abstractfinal.blogspot.com --- In rest-discuss@yahoogroups.com, "Alan Dean" <alan.dean@...> wrote: > > On 5/24/07, Keyur Shah <keyurva@...> wrote: > > > > Hopefully this isn't a dumb question: > > > > Assuming that I can differentate between the two and send the > > appropriate response to the client based on the accepted encoding(s) - > > Is it ok to generate the same ETag for both deflated and gzipped content? > > The ETag should be based on the underlying resource state, unrelated > to the specific representation MIME type or encoding. > > So the same resource rendered out to XML or JSON should have the same > ETag, ditto if it is GZip or Deflate encoded (or unencoded). > > Regards, > Alan Dean > http://thoughtpad.net/alan-dean >
Hello,

I was wondering if there is some kind of standard way to describe a RESTful API? Has somebody done this already and can give examples? I myself have some ideas:

* You can use UML class diagrams to describe the representations of resources and/or schemas (but these are harder to read, and can be generated from the diagram)
* You can use UML sequence diagrams to describe dynamic behavior, i.e. how the API is supposed to be used.
* How do we describe the possible operations on resources? It is not always wanted to change resources, or maybe custom errors must be returned.

PS: We use the RESTful protocol for inter-system communication, we do NOT use it for WorldWideWeb access.

regards,
Roger van de Kimmenade
Alan Dean wrote:
> On 5/24/07, Keyur Shah <keyurva@...> wrote:
> >
> > Hopefully this isn't a dumb question:
> >
> > Assuming that I can differentiate between the two and send the
> > appropriate response to the client based on the accepted encoding(s) -
> > Is it ok to generate the same ETag for both deflated and gzipped content?
>
> The ETag should be based on the underlying resource state, unrelated
> to the specific representation MIME type or encoding.
>
> So the same resource rendered out to XML or JSON should have the same
> ETag, ditto if it is GZip or Deflate encoded (or unencoded).

Well, no. It's a different variant, thus it needs a different ETag, otherwise caches will be screwed up. Don't conflate Transfer-Encoding and Content-Encoding.

Best regards, Julian
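If, as Julian says, each Content-Encoding variant needs its own ETag, one common approach is to derive it from the base validator plus the encoding name (Apache's mod_deflate does something similar, appending "-gzip" to the original ETag). A hedged sketch; the function name and the exact suffix convention are illustrative:

```python
def variant_etag(base_etag, content_encoding=None):
    """Derive a distinct ETag per encoded variant of the same resource.

    `base_etag` validates the identity (unencoded) representation; each
    Content-Encoding gets its own tag, so a cache never serves a gzipped
    body to a client that negotiated deflate (or no encoding at all).
    """
    if content_encoding in (None, "identity"):
        return base_etag
    # Embed the encoding inside the quoted opaque value.
    return '%s-%s"' % (base_etag.rstrip('"'), content_encoding)

print(variant_etag('"abc123"'))             # "abc123"
print(variant_etag('"abc123"', "gzip"))     # "abc123-gzip"
print(variant_etag('"abc123"', "deflate"))  # "abc123-deflate"
```

Note this only applies to Content-Encoding, which is a property of the representation; Transfer-Encoding is hop-by-hop and never affects the ETag.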
rogervdkimmenade wrote: > Hello, > > I was wondering if there is some kind of standard way to describe a > RESTful API? Has somebody done this already and can give examples? Whether the protocol is for internal or external consumption is moot; a protocol is a protocol is a protocol. There is no standard way, but there are exemplars you can follow. Good models are the Atom Publishing Protocol's provisional spec[1], and the example spec given in Paul James's article, 'A RESTful Web service, an example'[2]. K. [1] http://bitworking.org/projects/atom/ [2] http://peej.co.uk/articles/restfully-delicious.html -- Blacknight Internet Solutions Ltd. <http://blacknight.ie/> Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland Company No.: 370845
On 5/24/07, Nic James Ferrier <nferrier@...> wrote:
> "Steve Loughran" <steve.loughran.soapbuilders@...> writes:
>
> > On 5/24/07, Jim Sievert <james.sievert@...> wrote:
> >
> >> > PUT /resource?status=on
> >> > => 200
> >> > <doc>
> >> > <select name="status">
> >> > <option value="loading-microcode" selected="true"/>
> >> > <option value="off"/>
> >> > </select>
> >> > </doc>
> >
> > Starting to look suspiciously like a WS-DM resource state message
> > there; just need an XML schema to describe it and you'd be done. More
> > dangerously, it assumes that the resource can immediately start a
> > synchronous state transition, which may or may not be the case.
>
> "more dangerously"... well if it couldn't you'd clearly have to code
> for that wouldn't you?
as long as the first thing you were coding against was async, yes. The
problem is when the first thing you work against is sync (or so fast as
to make no difference) and then in production you hit the slow stuff.
Realistically, interaction styles that resemble synchronous method
calls are the most likely to lead to bad assumptions; the ones where
you go
toggle(devicename) {
Device d1=Lookup.find(deviceName);
d1.enter_state("on");
d1.ping();
d1.enter_state("off");
}
because what works for toggle("/home/lights") causes problems when you
go toggle("/ch/cern/colliders/LHC").
-steve
>> You can queue up state change requests if that is what you want.
>
> That's all we have here I think, unless I am missing something. The
> state reflected in the select could be "switching to on" for as long
> as it takes to confirm the next state (or whatever).
>
I think the problem is not so much how to model it in REST, but how to
interact with things that take a long time to react. At least with
REST you can poll state, there's no need to wait for a WS-Notification
event that will only arrive if you are on the same network, the
firewalls let it through and your laptop doesn't roam during the
operation.
-steve
I also reviewed a chunk of this book a couple of months ago (I've already received my free copy). As Mark said, there were a few things I thought were a little off the mark, and there were some things I would have said differently, but those are minor quibbles as the book is full of useful examples and explanations of the practical application of REST in building software for the web.

I'll be passing around my copy here at work and will probably buy a few extra copies so that I can make it required reading for people who work on projects with me.

--Chuck

On 5/24/07, Mark W. Humphries <mwh@...> wrote:
> I'm on the verge of ordering this book. Any considered opinions?
>
> Cheers,
> Mark Humphries
> Manila, Philippines
> realistically, interaction styles that resemble synchronous method
> calls are the most likely to lead to bad assumptions; the ones where
> you go

This is a really good point, and an undercurrent of this discussion. When you change a resource state like power=on or power=off using the REST principles, it's difficult to account for async behavior without getting RPCish.
"Jim Sievert" <james.sievert@...> writes:
>> realistically, interaction styles that resemble synchronous method
>> calls are the most likely to lead to bad assumptions; the ones where
>> you go
>
> This is a really good point that is an undercurrent of this discussion.
>
> When you change a resource state like power=on or power=off using the REST
> principles, it's difficult to account for async. behavior without getting
> RPCish.
I don't understand this.
I thought the REST style system we were talking about was a good
solution precisely because it could expose the async states while also
exposing the legal state transitions.
For example, when the resource is in:
<select name="status">
<option value="loading-microcode"/>
<option value="off"/>
</select>
the possible state changes are obvious AND it shows you the async
state of the resource.
What REST will NOT do for you is implement your polling. You have to
poll... but that's what was said earlier.
So, I am confused again. What is it that REST is not doing for you?
--
Nic Ferrier
http://www.tapsellferrier.co.uk
> I thought the REST style system we were talking about was a good
> solution precisely because it could expose the async states while also
> exposing the legal state transitions.
>
> For example, when the resource is in:
>
> <select name="status">
> <option value="loading-microcode"/>
> <option value="off"/>
> </select>
>
> the possible state changes are obvious AND it shows you the async
> state of the resource.

Yes, this works well assuming you understand the contract. Granted, it's hard to program without such understanding, but there's really nothing about the PUT status=on that a priori conveys that the operation will be async. In RPC systems there are patterns of async behavior, present in the interface, that are tip-offs to the underlying nature of the command.

As Steve hinted at, turning something like a light "on" is typically a synchronous notion. So setting a device to "on" might be construed the same way without having something in the initial contract to indicate that.

I'm having some trouble coming up with words that, in the REST style, convey a priori asynchronous behavior.
It came out on O'Reilly Safari two(?) days ago[1], so anyone with a Safari account can start reading it online.

[1] http://safari.oreilly.com/9780596529260

Mark W. Humphries wrote:
> I'm on the verge of ordering this book. Any considered opinions?
>
> Cheers,
> Mark Humphries
> Manila, Philippines

--
Zhang Yining
URL: http://www.zhangyining.net | http://www.yining.org
mailto: yining@... | zhang.yining@...
"Jim Sievert" <james.sievert@...> writes:

> As Steve hinted at, turning something like a light "on" is typically a
> synchronous notion. So setting a device to "on" might be construed the same
> way without having something in the initial contract to indicate that.
>
> I'm having some trouble coming up with words that, in the REST style, convey
> a priori asynchronous behavior.

The most normal way to deal with async behaviour in REST is to create a resource (POST to some place) and then poll it for its state change. This might be appropriate in some of the circumstances you're describing.

However, I can't see that it matters whether it is async or not. The interface is the interface. You can't *break* it by mistaking how it works under the hood.

In other words, the fact that the async or sync nature of the thing is hidden is a good thing, isn't it? It's just implementation detail.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
> The most normal way to deal with async behaviour in REST is to create
> a resource (POST to some place) and then poll it for its state
> change.

Okay, let's run with this. POST to the server, creating a "command" resource. It's parameterized to indicate PowerOn, targeting the resource corresponding to the device to which power will be applied. The command transitions from "Incomplete" to "Complete" over the course of its execution.

This approach nicely separates the device status/state from the command state, but I was under the impression that such an approach was too borderline RPC to be considered RESTful.
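Jim's command-resource idea can be sketched concretely: POST creates the command, and the client polls it while it runs from "Incomplete" to "Complete". Everything below (CommandStore, the state strings, the URI shapes) is an illustrative in-memory stand-in for the server side, not actual code from the thread:

```python
import uuid

# "POST creates a command resource, the client then polls it":
# POST /commands -> new resource at /commands/{id}, whose state
# transitions Incomplete -> Complete as the device work runs.
class CommandStore:
    def __init__(self):
        self._commands = {}

    def post(self, action, target):
        """Handle POST /commands: create the command resource, return its URI."""
        cid = uuid.uuid4().hex
        self._commands[cid] = {"action": action, "target": target,
                               "state": "Incomplete"}
        return "/commands/%s" % cid

    def get(self, uri):
        """Handle GET /commands/{id}: the representation the client polls."""
        return dict(self._commands[uri.rsplit("/", 1)[-1]])

    def run(self, uri):
        # Stand-in for the device actually executing the command.
        self._commands[uri.rsplit("/", 1)[-1]]["state"] = "Complete"

store = CommandStore()
loc = store.post("PowerOn", "/devices/42")
print(store.get(loc)["state"])  # Incomplete, immediately after the POST
store.run(loc)                  # the device finishes the work...
print(store.get(loc)["state"])  # ...and a later poll sees Complete
```

Whether this is "too RPCish" is exactly the question in the thread; the defense usually offered is that the command is itself an ordinary resource whose state you GET, so nothing outside the uniform interface is introduced.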
On 5/25/07, Jim Sievert <james.sievert@...> wrote:
> > I thought the REST style system we were talking about was a good
> > solution precisely because it could expose the async states while also
> > exposing the legal state transitions.
> >
> > For example, when the resource is in:
> >
> > <select name="status">
> > <option value="loading-microcode"/>
> > <option value="off"/>
> > </select>
> >
> > the possible state changes are obvious AND it shows you how the async
> > state of the resource.
>
> Yes, this works well assuming you understand the contract.
your contract has to be in the hypermedia:
<form action="url" method="PUT">
<select name="status">
<option value="loading-microcode"/>
<option value="off"/>
</select>
</form>
>Granted, it's
> hard to program without such understanding, but there's really nothing about
> the PUT status=on that a-priori conveys that the operation will be async.
> In RPC systems there are patterns of async. behavior that are present in the
> interface that are tip offs to the underlying nature of the command.
>
> As Steve hinted at, turning something like a light "on" is typically a
> synchronous notion. So setting a device to "on" might be construed the same
> way without having something in the initial contract to indicate that.
>
> I'm having some trouble coming up with words that in the REST style, convey
> a-priori asynchronous behavior.
>
--
Hugh Winkler
Wellstorm Development
http://www.wellstorm.com/
+1 512 694 4795 mobile (preferred)
+1 512 264 3998 office
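Hugh's point is that the contract lives in the hypermedia itself: a client learns the legal transitions by parsing the form, not from out-of-band documentation. A small sketch using Python's stdlib XML parser (the helper name is mine; the form is Hugh's example from above):

```python
import xml.etree.ElementTree as ET

# Hugh's form: the allowed transitions for the resource, carried in-band.
HYPERMEDIA = """
<form action="url" method="PUT">
  <select name="status">
    <option value="loading-microcode"/>
    <option value="off"/>
  </select>
</form>
"""

def allowed_transitions(doc):
    """Read the contract out of the representation itself: which values
    the client may PUT, and where. Nothing is known a priori."""
    form = ET.fromstring(doc)
    field = form.find(".//select")
    return {
        "method": form.get("method"),
        "action": form.get("action"),
        "field": field.get("name"),
        "values": [opt.get("value") for opt in field.findall("option")],
    }

print(allowed_transitions(HYPERMEDIA))
```

Those values, and only those, are the transitions the client may attempt; a different representation of the same resource can offer a different set.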
Is this the sort of thing you're looking for? http://www.google.com/search?q=WADL --Chuck On 5/25/07, rogervdkimmenade <rvdkimmenade@...> wrote: > Hello, > > I was wondering if there is some kind of standard way to describe a > RESTful API? Has somebody done this already and can give examples? > I myself have some ideas: > > * You can use UML class diagrams to describe the representations > resources and/or schemas (but these are harder to read and can be > generated from the diagram) > * You can use UML sequence diagrams to describe dynamic behavior. So > how the API is supposed to be used. > * How do we describe the possible operations on resources? > It is not always wanted to change resources or maybe custom errors > must be returned. > > PS We use the RESTful protocol for inter-system communication, we do > NOT use it for WorldWideWeb access. > > regards, > Roger van de Kimmenade > > > > > Yahoo! Groups Links > > > >
> > Yes, this works well assuming you understand the contract.
>
> your contract has to be in the hypermedia:

Sorry. I wasn't quibbling about the actual representation. Call it poorly executed message snipping...
If you want to make it so posts can be retrieved by date and by ID, how do you make the URLs? The square brackets indicate that mm and dd are optional.

Is this RESTful:
/yyyy/mm/dd
/yyyy/mm
/yyyy

or should it be:
/yyyy-mm-dd
/yyyy-mm
/yyyy

?

Clearly you can't mix a date and an ID based URL. So I'm thinking:

/posts_by_date/yyyy...
and
/posts/id

How are you handling situations like this RESTfully? It seems that many of the web frameworks out there map RESTful URLs to code more comfortably if there's only one parameter (i.e. /posts_by_date/yyyy-mm-dd rather than /posts_by_date/yyyy/mm/dd), especially if the URL goes beyond that (i.e. /posts_by_date/yyyy-mm-dd/author/ID).

What's the most RESTful way of doing this? Clues? :)

Thanks,
Scott
Scott Chapman <scott_list@...> writes:
> It seems that many of the web frameworks out there map RESTful URL's to code
> more comfortably if there's only one parameter (i.e. /posts_by_date/yyyy-mm-dd
> rather than /posts_by_date/yyyy/mm/dd), especially if the URL goes beyond that
> (i.e. /posts_by_date/yyyy-mm-dd/author/ID).
>
> What's the most RESTful way of doing this?
This is not really about REST but about URI composition.
As far as frameworks go, Django lets you map URIs like this:
(r'^diary/(?P<month>[0-9]+)/(?P<day>[0-9]+)$', 'cal.views.show')
the cal.views.show function will then be defined by you like this:
def show(request, month, day):
blah
this is just syntactic sugar though.
Surely you can just use a front ending function to break down the uri
and call other things?
It seems to me, unless your hierarchy is built that way, that date-based
retrieval is an ideal application for search.
--
Nic Ferrier
http://www.tapsellferrier.co.uk
The bigger question that I'm wrestling with is, "How far do you take the mapping of complex queries to the RESTful URL paradigm?" I.e. if you have a query, "SELECT post_id FROM posts WHERE year(post_date) = 2007 AND month(post_date) = 4", how do you map that to RESTful URLs? This gets arbitrarily complex. REST doesn't look like it was made to do a full mapping of URLs to SQL.

Scott Chapman wrote:
> If you want to make it so posts can be retrieved by date and by ID, how do
> you make the URLs?
>
> The square brackets indicate that mm and dd are optional.
>
> Is this RESTful:
> /yyyy/mm/dd
> /yyyy/mm
> /yyyy
>
> or should it be:
> /yyyy-mm-dd
> /yyyy-mm
> /yyyy
>
> ?
>
> Clearly you can't mix a date and an ID based URL. So I'm thinking:
>
> /posts_by_date/yyyy...
> and
> /posts/id
>
> How are you handling situations like this RESTfully?
>
> It seems that many of the web frameworks out there map RESTful URLs to code
> more comfortably if there's only one parameter (i.e. /posts_by_date/yyyy-mm-dd
> rather than /posts_by_date/yyyy/mm/dd), especially if the URL goes beyond that
> (i.e. /posts_by_date/yyyy-mm-dd/author/ID).
>
> What's the most RESTful way of doing this?
Having little to do with REST but my strong preference is: /yyyy/mm/dd Instead of: /yyyy-mm-dd Because the former is hackable and the latter is not: /yyyy/mm/ /yyyy/ This also allows for relative references in content returned as per specs. So a link to "/yyyy/mm/foo/" could be referred to simply as "foo/" in the representation returned by "/yyyy/mm/". -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Scott Chapman > Sent: Friday, May 25, 2007 2:36 PM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] How to do /posts/yyyy[/mm[/dd]] and > /posts/id RESTfully? > > If you want to make it so posts can be retrieved by date and > by ID, how do you make the URL's? > > The square brackets indicate that mm and dd are optional. > > Is this RESTful: > /yyyy/mm/dd > /yyyy/mm > /yyyy > > or should it be: > /yyyy-mm-dd > /yyyy-mm > /yyyy > > ? > > Clearly you can't mix a date and id based URL. So I'm thinking: > > /posts_by_date/yyyy.... > and > /posts/id > > How are you handling situations like this RESTfully? > > It seems that many of the web frameworks out there map > RESTful URL's to code more comfortably if there's only one > parameter (i.e. /posts_by_date/yyyy-mm-dd rather than > /posts_by_date/yyyy/mm/dd), especially if the URL goes beyond > that (i.e. /posts_by_date/yyyy-mm-dd/author/ID). > > What's the most RESTful way of doing this? > > Clues? :) > > Thanks, > Scott > > > > Yahoo! Groups Links > > >
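Mike's hackable /yyyy[/mm[/dd]] scheme is straightforward to route with optional trailing segments. A hedged sketch (the /posts prefix and function name are illustrative, not from the thread):

```python
import re

# /posts/yyyy[/mm[/dd]] with optional month and day, per the scheme above;
# a trailing slash is tolerated so the hackable forms /yyyy/mm/ and /yyyy/ work.
DATE_PATH = re.compile(
    r"^/posts/(?P<year>\d{4})(?:/(?P<month>\d{2})(?:/(?P<day>\d{2}))?)?/?$"
)

def parse_posts_path(path):
    """Return (year, month, day) with None for omitted segments,
    or None if the path isn't a date-based posts URL."""
    m = DATE_PATH.match(path)
    if not m:
        return None
    return tuple(int(g) if g is not None else None
                 for g in (m.group("year"), m.group("month"), m.group("day")))

print(parse_posts_path("/posts/2007/05/25"))  # (2007, 5, 25)
print(parse_posts_path("/posts/2007/05/"))    # (2007, 5, None)
print(parse_posts_path("/posts/2007"))        # (2007, None, None)
```

Chopping segments off the end always yields another valid resource, which is exactly what makes the slash-separated form "hackable" where /yyyy-mm-dd is not.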
Scott Chapman <scott_list@...> writes: > The bigger question that I'm wrestling with is, "How far do you take the > mapping of complex queries to the RESTful URL paradigm?" I.e. if you have a > query, "SELECT post_id FROM posts WHERE year(post_date) = 2007 and > month(post_date) = 4" how do you map that to RESTful URL's? This gets > arbitrarily complex. REST doesn't look like it was made to do a full mapping > of URL's to SQL. I'm afraid my answer is typically Confucian: as far as it needs to go. REST is an application architectural style. You need to do what is right for the application. Don't try and make a perfect SQL-to-resources mapping just because it would be a perfect mapping. Instead, look at what the application's needs are and then design the implementation with REST. Q: How many roads must a man walk down? A: 42 -- Nic Ferrier http://www.tapsellferrier.co.uk
Nic James Ferrier wrote:
>
>
> Scott Chapman <scott_list@... <mailto:scott_list%40mischko.com>>
> writes:
>
> > It seems that many of the web frameworks out there map RESTful URL's
> to code
> > more comfortably if there's only one parameter (i.e.
> /posts_by_date/yyyy-mm-dd
> > rather than /posts_by_date/yyyy/mm/dd), especially if the URL goes
> beyond that
> > (i.e. /posts_by_date/yyyy-mm-dd/author/ID).
> >
> > What's the most RESTful way of doing this?
>
> This is not really about REST but about URI composition.
>
> As far as frameworks go, Django lets you map URIs like this:
>
> (r'^diary/(?P<month>[0-9]+)/(?P<day>[0-9]+)$', 'cal.views.show')
cal_dict = {
'queryset': Calendar.objects.all(),
"extra_context": {},
}
urlpatterns += patterns('django.views.generic.date_based',
(r'^diary/(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\w{1,2})/(?P<slug>[-\w]+)/',
'object_detail',
dict(cal_dict, month_format="%m", slug_field='uid')),
...
)
which combines a date-based archive with an arbitrary object field at
the end ('slug_field' above); that avoids the primary keys that other
frameworks use by default.
As a result I've gotten into a habit of sometimes giving objects uid
fields (which can be pumped into atom:ids as well). I seem to recall
Mike Koziarski thinking it was boneheaded to put arbitrary data like
that on an object, but the upside is that database autoincrements will
never burn me once I partition or have to re-import.
Django is one of the best frameworks available if you're serious about
this REST stuff. It does ETags, cache control (+Vary), compression,
access to headers, and charsets, without making a big deal out of any of it.
I've nothing but good things to say about a framework that provides
methods like get_object_or_404().
cheers
Bill
Scott Chapman wrote:
>
>
> The bigger question that I'm wrestling with is, "How far do you take the
> mapping of complex queries to the RESTful URL paradigm?" I.e. if you have a
> query, "SELECT post_id FROM posts WHERE year(post_date) = 2007 and
> month(post_date) = 4" how do you map that to RESTful URL's? This gets
> arbitrarily complex. REST doesn't look like it was made to do a full
> mapping
> of URL's to SQL.
No, it's not; that's why people around here will not say that HTTP
methods map directly onto SQL CRUD. But "YYYY/MM/DD/{slug}" is such an
idiomatic URL pattern for permalinks, you might as well use it, with
"YYYY", "YYYY/MM" and "YYYY/MM/DD" for outputting date based
collections. If you need date ranges for reports then "?start=&end=" on
a GET query is fine.
cheers
Bill
Alan Dean wrote: > On 5/24/07, Keyur Shah <keyurva@...> wrote: >> Hopefully this isn't a dumb question: >> >> Assuming that I can differentiate between the two and send the >> appropriate response to the client based on the accepted encoding(s) - >> Is it ok to generate the same ETag for both deflated and gzipped content? > > The ETag should be based on the underlying resource state, unrelated > to the specific representation MIME type or encoding. > > So the same resource rendered out to XML or JSON should have the same > ETag, ditto if it is GZip or Deflate encoded (or unencoded). No, if the difference is from content-coding, then absolutely not. "Entity tags are used for comparing two or more entities from the same requested resource." "A "strong entity tag" MAY be shared by two entities of a resource only if they are equivalent by octet equality." "A "weak entity tag," indicated by the "W/" prefix, MAY be shared by two entities of a resource only if the entities are equivalent and could be substituted for each other with no significant change in semantics." "An entity tag MUST be unique across all versions of all entities associated with a particular resource." It's not clear whether Keyur is talking about transfer-encoding or content-coding. In the case of content-coding (which in practice often proves problematic in a few other ways) then the e-tag for the gzipped version, the deflated version and the uncompressed version must not be the same as each other (or for that matter, the same as for any other version, including historical versions no longer available, with the caveat that if they are semantically the same as a historical version - to the point that that historical version could be safely used in its place - then they could share a weak but not a strong e-tag). In the case of transfer-encoding this happens at a different conceptual layer to the e-tag, and so they can be shared.
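Jon's rule can be sketched in a few lines: when each content-coding produces different octets, each encoded variant gets its own strong ETag. This is a minimal illustration with a dict-backed "origin"; the names are illustrative, not from any framework, and the hash-based tag scheme is just one common way to mint strong validators.

```python
# Strong ETags must differ whenever the octets differ, so each
# content-coded variant of the same entity needs its own tag.
import gzip
import hashlib
import zlib

body = b"<html><body>hello</body></html>"

variants = {
    "identity": body,
    "gzip": gzip.compress(body, mtime=0),  # mtime=0 keeps output stable
    "deflate": zlib.compress(body),
}

# Derive each strong ETag from the actual octets sent on the wire:
etags = {coding: '"%s"' % hashlib.sha1(octets).hexdigest()
         for coding, octets in variants.items()}

# The three variants are not octet-equal, so no two may share
# a strong ETag.
assert len(set(etags.values())) == 3
```

A weak validator (`W/"..."`) could be shared across the variants, since decoded they are semantically interchangeable; the strong ones cannot.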
Jim Sievert wrote: > but I was under the impression that such an approach was too > borderline RPC to be considered RESTful. That's perhaps a matter of thinking about "REST vs RPC" rather than thinking about REST.
On 5/25/07, Julian Reschke <julian.reschke@...> wrote: > > It's a different variant, thus needs a different ETag, otherwise caches > will be screwed up. On 5/25/07, Jon Hanna <jon@...> wrote: > > "A "strong entity tag" MAY be shared by two entities of a resource only > if they are equivalent by octet equality." My apologies, I stand corrected. Thanks guys. Alan Dean http://thoughtpad.net/alan-dean
I think DELETE often gets overlooked as being self-explanatory. But it doesn't have to be the same as 'rm'. Let's say I have a resource: http://example.org/foo.html My implementation of GET responds 200 OK with a representation of foo.html, unless foo.html is zero bytes, in which case the response is 410 Gone. My implementation of DELETE will respond 200 OK with the string: "Set to 410 Gone, DELETE again to remove." After setting foo.html to zero bytes, unless foo.html already was zero bytes, in which case foo.html really is deleted. This second DELETE will respond 204 No Content, subsequent GET requests will be 404 not 410. -Eric
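Eric's two-stage DELETE can be sketched as a toy handler; this is a minimal illustration with a dict as the store (a zero-byte entry stands in for "410 Gone", a missing key for "404 Not Found") and is not his actual server:

```python
# Two-stage DELETE: first DELETE marks the resource Gone (410),
# second DELETE removes it entirely (subsequent GETs are 404).
store = {"/foo.html": b"<p>hello</p>"}

def handle(method, path):
    if method == "GET":
        if path not in store:
            return 404
        if store[path] == b"":
            return 410
        return 200
    if method == "DELETE":
        if path not in store:
            return 404
        if store[path] == b"":
            del store[path]       # second DELETE: really remove
            return 204
        store[path] = b""         # first DELETE: mark as Gone
        return 200
    return 405

assert handle("DELETE", "/foo.html") == 200
assert handle("GET", "/foo.html") == 410
assert handle("DELETE", "/foo.html") == 204
assert handle("GET", "/foo.html") == 404
```

As the follow-ups note, responding to the intermediate state with 410 has caching implications, since 410 asserts permanence.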
Eric J. Bowman wrote: > > > I think DELETE often gets overlooked as being self-explanatory. But it > doesn't have to be the same as 'rm'. Let's say I have a resource: > > http://example. org/foo.html <http://example.org/foo.html> > > My implementation of GET responds 200 OK with a representation of foo.html, > unless foo.html is zero bytes, in which case the response is 410 Gone. > > My implementation of DELETE will respond 200 OK with the string: > > "Set to 410 Gone, DELETE again to remove." > > After setting foo.html to zero bytes, unless foo.html already was zero > bytes, in which case foo.html really is deleted. This second DELETE will > respond 204 No Content, subsequent GET requests will be 404 not 410. > > -Eric Hm. I don't think this is how 404 and 410 are meant to work. From <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.10.4.5>: "The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable." So my understanding is that 410 is a stronger variant of 404: "there's nothing here, and it's going to stay that way". Best regards, Julian
> So my understanding is that 410 is a stronger variant of 404: "there's > nothing here, and it's going to stay that way". > My interpretation would be: 404: no resource is located here 410: there used to be a resource here, but this is no longer the case, permanently
Hi Benoît, > > The HTTP specification defines > * a small set of semantically defined methods ; > * and existing error codes. > Do we need a "RESTful Web Services" description language? > > And does it not contradict the hypermedia principle? If I have a > global description of my service I will build URIs and HTTP requests > to talk to the server instead of following links in the > representations. I think the key thing is "hypermedia as the engine of application state". The goal is to use HTML (or XML) as the mechanism for specifying service navigation, as it already is for humans. I think WADL goes a bit overboard, but it is a useful starting point. A better solution (IMHO) would embed WADL-style information directly into the responses. -- Ernie P. On May 26, 2007, at 12:43 AM, Benoît Fleury wrote: > Hi all, > > I'll take advantage of Chuck mentioning WADL to ask whether there is > a place for a description language in the REST architectural style. > > One REST principle is the uniform interface. > > The HTTP specification defines > * a small set of semantically defined methods ; > * and existing error codes. > Do we need a "RESTful Web Services" description language? > > And does it not contradict the hypermedia principle? If I have a > global description of my service I will build URIs and HTTP requests > to talk to the server instead of following links in the > representations. > > What do you think? > > -- benoit fleury > > > 2007/5/25, Chuck Hinson <chuck.hinson@...>: > Is this the sort of thing you're looking for? > > http://www.google.com/search?q=WADL > > --Chuck > > On 5/25/07, rogervdkimmenade <rvdkimmenade@...> wrote: > > Hello, > > > > I was wondering if there is some kind of standard way to describe a > > RESTful API? Has somebody done this already and can give examples? 
> > I myself have some ideas: > > > > * You can use UML class diagrams to describe the representations of > > resources and/or schemas (though schemas are harder to read and can be > > generated from the diagram) > > * You can use UML sequence diagrams to describe dynamic behavior, i.e. > > how the API is supposed to be used. > > * How do we describe the possible operations on resources? > > It is not always desirable to change resources, or maybe custom errors > > must be returned. > > > > PS We use the RESTful protocol for inter-system communication; we do > > NOT use it for WorldWideWeb access. > > > > regards, > > Roger van de Kimmenade
Jon,
I may be mixing incongruent pieces here but here's another question - If I set the Vary header as "Vary: Accept-Encoding" - is it then ok to set the same (weak) ETag?...
I need to understand transfer-encoding better; I'm unfamiliar with how its semantics work. But thanks for the pointer.
-Keyur
Jon Hanna <jon@...> wrote: Alan Dean wrote:
> On 5/24/07, Keyur Shah wrote:
>> Hopefully this isn't a dumb question:
>>
>> Assuming that I can differentiate between the two and send the
>> appropriate response to the client based on the accepted encoding(s) -
>> Is it ok to generate the same ETag for both deflated and gzipped content?
>
> The ETag should be based on the underlying resource state, unrelated
> to the specific representation MIME type or encoding.
>
> So the same resource rendered out to XML or JSON should have the same
> ETag, ditto if it is GZip or Deflate encoded (or unencoded).
No, if the difference is from content-coding, then absolutely not.
"Entity tags are used for comparing two or more entities from the same
requested resource."
"A "strong entity tag" MAY be shared by two entities of a resource only
if they are equivalent by octet equality."
"A "weak entity tag," indicated by the "W/" prefix, MAY be shared by two
entities of a resource only if the entities are equivalent and could be
substituted for each other with no significant change in semantics."
"An entity tag MUST be unique across all versions of all entities
associated with a particular resource."
It's not clear whether Keyur is talking about transfer-encoding or
content-coding. In the case of content-coding (which in practice often
proves problematic in a few other ways) then the e-tag for the gzipped
version, the deflated version and the uncompressed version must not be
the same as each other (or for that matter, the same as for any other
version including historical version no longer available with the caveat
that if they are semantically the same as a historical version - to the
point that that historical version could be safely used in its place -
then they could share a weak but not a strong e-tag).
In the case of transfer-encoding this happens at a different conceptual
layer to the e-tag, and so they can be shared.
Assuming that I, as the server administrator, know the difference between a 404 and a 410 and desire to implement a 410 response on my server, then I could conceivably use a two-stage DELETE to toggle between 404 and 410 responses. Let's say I have a resource: http://example.org/foo.html My implementation of GET responds 200 OK with a representation of foo.html, unless foo.html is zero bytes, in which case the response is 410 Gone. My implementation of DELETE will respond 200 OK with the string: "Set to 410 Gone, DELETE again to remove." After setting foo.html to zero bytes, unless foo.html already was zero bytes, in which case foo.html really is deleted. This second DELETE will respond 204 No Content, subsequent GET requests will be 404 not 410. Of course, it is up to the server administrator to DELETE twice when a 410 Gone response would be inappropriate for the resource in question, to achieve the desired 404 response once this system is implemented. This is an example of how DELETE isn't always self-explanatory. -Eric
* Eric J. Bowman <eric@...> [2007-05-26 12:15]: > My implementation of DELETE will respond 200 OK with the > string: > > "Set to 410 Gone, DELETE again to remove." > > After setting foo.html to zero bytes, unless foo.html already > was zero bytes, in which case foo.html really is deleted. This > second DELETE will respond 204 No Content, subsequent GET > requests will be 404 not 410. You’re right in your interpretation of DELETE, but wrong about status 410. If an origin server says Gone, f.ex., intermediaries are free to cache this response because in contrast with 404, 410 is a strong assertion that this resource is gone and will remain gone, forever. You could respond 403 after the first DELETE. I know there is at least one AtomPP server implementation that moves Entries to a trash collection on first DELETE and responds 301 if you try to retrieve them at the deleted address; if you DELETE an Entry from the trash collection, it becomes 410. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On 5/26/07, Eric J. Bowman <eric@...> wrote: > > "Set to 410 Gone, DELETE again to remove." It seems to me that this will mean that the UA is sending a DELETE to a URI that the origin server responds as Gone. Rather counter-intuitive. I'm not sure how intermediaries will treat the message set. Would a better alternative be the following? -> DELETE /foo <- 204 No Content -> GET /foo <- 410 Gone -> POST / Content-Type: application/x-www-form-urlencoded uri=/foo&status=404 <- 307 Temporary Redirect Location: /foo <html> <head>...</head> <body> <p><a href="/foo">/foo</a> has been set to 404 Not Found.</p> </body> </html> -> GET /foo <- 404 Not Found Regards, Alan Dean http://thoughtpad.net/alan-dean
* rogervdkimmenade <rvdkimmenade@...> [2007-05-25 09:20]: > I was wondering if there is some kind of standard way to > describe a RESTful API? This is a contradiction in terms. REST is defined by two properties: 1. Server state is exposed as a set of resources that have a uniform interface and are named by URIs. 2. The client doesn’t make any assumptions about the server URI space; all it does is follow links. #1 means there isn’t much to describe; there aren’t any functions or function signatures to document. And #2 means that what *is* there (the set of which resources exist and which verbs they respond to) *SHOULDN’T* be baked into the client. There is a sort of equivalent to a description of an RPC API for REST, but it looks very different from what you’d expect: It’s the media type specification. The key for REST lies in the fact that the client understands the representations the server returns, and can interpret them to find out which other resources the server offers and what they can be used for. F.ex., a browser understands HTML enough so that it knows that it can GET the URI given by `<a href>`; it is expected to (but may choose not to) GET the URI described by `<img src>`; and it can either GET or POST to the URI given by `<form action>`, with the HTML spec describing how to use the rest of the elements inside the `<form>` to either augment the URI or construct a POST body. The same thing is what you see when you look at the Atom Protocol: there is a description of a media type that explains where the client will find URIs in the representation returned by the server and what kind of things can be done with a URI depending on which place in the service document it was found. So this is how you describe a REST API: you explain the type of response body your server will send, where to find links in it, and what sort of verbs a particular link implies about the resource it points to, along with the semantics represented by the link in question. F.ex. 
`<cart href>` and `<item href>` may point to resources that respond to the same verbs, but the resources mean different things and if you can PUT or POST to them the request body might have to be constructed differently. The client needs to understand these differences to successfully operate the service. Notice how that doesn’t tell the client anything about what the URIs will look like – and that’s perfectly intentional. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
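Aristotle's point that the media type is the "API description" can be sketched as a client that knows only HTML and discovers the server's resources by extracting `<a href>` links from a representation. A minimal stdlib illustration; the HTML snippet and cart/item URIs are hypothetical:

```python
# A hypermedia client doesn't hard-code the URI space: it parses the
# representation (here, HTML) and follows whatever links it finds.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # The HTML media type spec is what tells us <a href> is a link
        # the client may GET; that knowledge is baked into the client,
        # the URIs themselves are not.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A hypothetical representation returned by some server:
html = '<p>See <a href="/cart">your cart</a> or <a href="/items/42">item 42</a>.</p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/cart', '/items/42']
```

The server is free to rename `/cart` to `/hghgdafdfyf` tomorrow; this client keeps working, because it never assumed a layout.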
> > I may be mixing incongruent pieces here but here's another question - If I > set the Vary header as "Vary: Accept-Encoding" - is it then ok to set the > same (weak) ETag?... > No, it's a good question and not the easiest to answer clearly. When you enable GZip, you double the number of representations available. Each variant now comes in two versions, inflated and deflated. After you decode the deflated version, it is bit-for-bit identical to the inflated version. So the ETag is the same either way, even if it's a weak validator. If an intermediary has cached foo.html in gzipped form, then a request comes in from a user-agent which doesn't support gzip, the intermediary is free to inflate and serve foo.html to the client if the server validates the request, because the only variance is in the Accept-Encoding request header. If you're sending a weak validator then the intermediary is free to inflate and serve foo.html to the client, even if the server has a more up-to-date, deflated version of foo.html available. -Eric
(apologies about breaking threading, yahoogroups and mailsnare don't mix, as a result my Yahoo ID keeps getting deleted so participating here is tough) > > It seems to me that this will mean that the UA is sending a DELETE to > a URI that the origin server responds as Gone. Rather > counter-intuitive. I'm not sure how intermediaries will treat the > message set. > > Would a better alternative be the following? > Right. Works in practice, but that tests only the network path between my workstation and my server. I don't want to require a VPN connection to DELETE, nor do I want to set 404 and 410 responses to uncacheable. What I've described has the advantage of being simple to implement on the server, the disadvantage is it's a bit bass-ackwards. So I'm looking for a simple, fass-rontwards alternative, or something besides no-cache to make what I have work. -Eric
On 5/26/07, A. Pagaltzis <pagaltzis@gmx.de> wrote: > > * rogervdkimmenade <rvdkimmenade@gmail.com> [2007-05-25 09:20]: > > I was wondering if there is some kind of standard way to > > describe a RESTful API? > > This is a contradiction in terms. REST is defined by two > properties: > > 1. Server state is exposed as a set of resources that have a > uniform interface and are named by URIs. > > 2. The client doesn't make any assumptions about the server URI > space; all it does is follow links. > > #1 means there isn't much to describe; there aren't any functions > or function signatures to document. And #2 means that what *is* > there (the set of which resources exist and which verbs they > respond to) *SHOULDN'T* be baked into the client. > > There is a sort of equivalent to a description of an RPC API for > REST, but it looks very different from what you'd expect: > > It's the media type specification. > > The key for REST lies in the fact that the client understands the > representations the server returns, and can interpret them to > find out which other resources the server offers and what they > can be used for. F.ex., a browser understands HTML enough so that > it knows that it can GET the URI given by `<a href>`; it is > expected to (but may choose not to) GET the URI described by > `<img src>`; and it can either GET or POST to the URI given by > `<form action>`, with the HTML spec describing how to use the > rest of the elements inside the `<form>` to either augment the > URI or construct a POST body. > > The same thing is what you see when you look at the Atom > Protocol: there is a description of a media type that explains > where the client will find URIs in the representation returned by > the server and what kind of things can be done with a URI > depending on which place in the service document it was found. 
> > So this is how you describe a REST API: you explain the type of > response body your server will send, where to find links in it, > and what sort of verbs a particular link implies about the > resource it points to, along with the semantics represented by > the link in question. F.ex. `<cart href>` and `<item href>` may > point to resources that respond to the same verbs, but the > resources mean different things and if you can PUT or POST to > them the request body might have to be constructed differently. > The client needs to understand these differences to successfully > operate the service. > > Notice how that doesn't tell the client anything about what the > URIs will look like – and that's perfectly intentional. Aristotle, I think that you are right on the money. I am putting together a new talk for the user community here in the UK and that has led me to think a lot about some of the myths and misconceptions surrounding REST. One of the big ones is that REST == CRUD. Another is that REST needs an equivalent to WSDL: "the WADL myth". Furthermore, I think that the two myths are linked. My suspicion is that the thinking goes as follows: Q. How do you explain REST to the uninitiated? A. Say that PUT=Create, GET=Read, POST=Update and DELETE=Delete Q. How do I then map CRUD onto a URI-space for a PowerPoint slide? A. Make the URI-space look like a database table (thus preserving the CRUD myth) e.g. /fruit/apples /fruit/oranges /fruit/pears Q. If we have such a human readable URI-space, then surely we can provide a description of the 'fruit service'? A. Sure, then we can make REST 'feel like' WS-* and thus be less threatening to REST newbies who know WS-*. Enter WADL or alternative. The problem is that the precepts for this thought pattern are both myths, thus the "Garbage In Garbage Out" principle applies. Now, I'm not claiming to be an expert on REST, and if *my* thinking is wrong then I am happy to stand corrected. 
Indeed, a big part of the reason why I watch and participate in this list is to try to continually improve my own thinking. However, I am coming to the conclusion that the centerpiece of my new talk needs to be a debunking of these myths and an explanation that the following URI-space could be equally as valid as the one above to describe fruit: /hghgdafdfyf/66sdgdbvcj/dyjtd6 /gfgfgfs444vcx/djdhjdh/ /gghya5tdgvcmlv/56576nd/fgfgf/ttryry and that the only way that a RESTful UA can traverse such a service is by dynamic discovery: no WS-style static discovery for us lemons! Regards, Alan Dean http://thoughtpad.net/alan-dean
> > Having little to do with REST but my strong preference is: > > /yyyy/mm/dd > > Instead of: > > /yyyy-mm-dd > > Because the former is hackable and the latter is not: > > /yyyy/mm/ > /yyyy/ > > This also allows for relative references in content returned as per specs. > So a link to "/yyyy/mm/foo/" could be referred to simply as "foo/" in the > representation returned by "/yyyy/mm/". > I'm not sure there's been enough information provided about the application in question to make any judgments or apply any preferences. Here's an example, where having a "hackable" URI scheme would be nonsensical: http://en.ericjbowman.com/date;transform=1?iso=2007-05-25 I say it does have an orthogonal relationship to REST, in that if my /date service had a hierarchical URI allocation scheme it would strongly imply a hierarchical organization of the information space which just isn't there. -Eric
On 5/26/07, Eric J. Bowman <eric@...> wrote: > > > > > I may be mixing incongruent pieces here but here's another question - If I > > set the Vary header as "Vary: Accept-Encoding" - is it then ok to set the > > same (weak) ETag?... > > > > No, it's a good question and not the easiest to answer clearly. ... > So the ETag is the same either way It is tricky! ;) See "Clarifying the Fundamentals of HTTP" by Jeff Mogul-- An entity tag must be assigned before the range selection. Otherwise, a client trying to assemble a full result from two or more ranges (in multiple messages) could not match the entity tags to test cache coherency. The specification must allow an entity tag to be assigned after the application of a content-coding, because it already allows the server to store its data in a pre-encoded form (and thus to require the entity tag to be assigned prior to any content coding would make all existing servers non-compliant). -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
On 5/27/07, Robert Sayre <sayrer@...> wrote: > > See "Clarifying the Fundamentals of HTTP" by Jeff Mogul The paper can be downloaded from: http://www2002.org/CDROM/refereed/444.pdf Regards, Alan Dean http://thoughtpad.net/alan-dean
> However, I am coming to the conclusion that the centerpiece > of my new talk needs to be a debunking of these myths and an > explanation that the following URI-space could be equally as > valid as the one above to describe fruit: > > /hghgdafdfyf/66sdgdbvcj/dyjtd6 > /gfgfgfs444vcx/djdhjdh/ > /gghya5tdgvcmlv/56576nd/fgfgf/ttryry > > and that the only way that a RESTful UA can traverse such a > service is by dynamic discovery: no WS-style static discovery > for us lemons! Sigh. This is the problem I discussed when I first started following the rest-discuss list; people's insistence on interpreting the URI opacity axiom to mean they should create obtuse URL structures. And in parallel there has been complete resistance to some set of well-known hypermedia document formats. The fact that those well-known formats don't exist is why it's impossible for a vendor or an open-source project to create working and pre-tested code that can interoperate with solutions created by others. Having such a thing is in no way anti-REST, simply because such a thing would be a layer on top of the REST architectural style. And the religious blowback whenever this has been brought up in the past has just baffled me. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
* Mike Schinkel <mikeschinkel@...> [2007-05-27 03:35]: > > However, I am coming to the conclusion that the centerpiece > > of my new talk needs to be a debunking of these myths and an > > explanation that the following URI-space could be equally as > > valid as the one above to describe fruit: > > > > /hghgdafdfyf/66sdgdbvcj/dyjtd6 > > /gfgfgfs444vcx/djdhjdh/ > > /gghya5tdgvcmlv/56576nd/fgfgf/ttryry > > > > and that the only way that a RESTful UA can traverse such a > > service is by dynamic discovery: no WS-style static discovery > > for us lemons! > > Sigh. > > This is the problem I discussed when I first started following > the rest-discuss list; people's insistence on interpreting the > URI opacity axiom to mean they should create obtuse URL > structures. No one said you *should* create obtuse URI spaces; the point is that the layout of your URI space has no effect on whether your application is RESTful. The crucial point is that the URI space is controlled by the server and the server alone; the client should not expect any particular layout. That doesn’t mean that you shouldn’t design your URIs carefully. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
> No one said you *should* create obtuse URI spaces; the point > is that the layout of your URI space has no effect on whether > your application is RESTful. The crucial point is that the > URI space is controlled by the server and the server alone; > the client should not expect any particular layout. That > doesn't mean that you shouldn't design your URIs carefully. Agreed. But from past experience, many people *interpret* it to mean you should create obtuse URLs. It's called dogma, and it sadly appears around any 'religion.' This [1] should put it in perspective. '-) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us [1] http://archive.salon.com/comics/boll/2007/01/11/boll/index1.html
> This is the problem I discussed when I first started > following the rest-discuss list; people's insistence on > interpreting the URI opacity axiom to mean they should > create obtuse URL structures. I've never seen anyone suggest obtuse URL structures are desirable. They may use them to illustrate that there can be value even in the face of opacity. Opacity means that - for some uses - an obtuse URL or a human-readable URL are equivalent. If you want to create readable URLs go ahead, party on. > > And in parallel there has been complete resistance to some > kind of well-known hypermedia document formats. Complete resistance to HTML? Or Web forms?
> > Agreed. But from past experience many people *interpret* that > it means you should create obtuse URLs. It's called dogma, > and is sadly appears around any 'religion.' This [1] should > put it in perspective. '-) Well, now that you know better you too can join the league of justice and help correct these wrongs.
> > This is the problem I discussed when I first started following the > > rest-discuss list; people's insistence on interpreting the URI > > opacity axiom to mean they should create obtuse URL structures. > I've never seen anyone suggest obtuse URL structures are > desirable. They may use them to illustrate that there can be > value even in the face of opacity. Just because you've not seen something doesn't mean it doesn't happen. I've been studying everything related to URLs for close to a year now, I have more than a foot-high stack of printouts about URL design (so I could read offline and annotate) and believe me, there are numerous people who subscribe to that dogma. I'd dig up the numerous references for you but frankly don't have the energy to do so at the moment. > Opacity means that - for some uses - an obtuse URL or a > human-readable URL are equivalent. If you want to create > readable URLs go ahead, party on. I don't disagree, but that's misleading. Aristotle's definition was better: The crucial point is that the URI space is controlled by the server and the server alone; the client should not expect any particular layout. > > And in parallel there has been complete resistance to some kind of > > well-known hypermedia document formats. > Complete resistance to HTML? Or Web forms? Resistance to defining (in essence) well-known schemas or well-known sets of microformats for use with web services. I had long and lengthy discussions on that subject on this list about nine months ago. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
> > Agreed. But from past experience many people *interpret* > > that it means you should create obtuse URLs. It's called > > dogma, and sadly appears around any 'religion.' This > > [1] should put it in perspective. '-) > Well, now that you know better you too can join the league of > justice and help correct these wrongs. So just exactly what do you propose? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
On 5/27/07, Mike Schinkel <mikeschinkel@...> wrote: > > Sigh. > > This is the problem I discussed when I first started following the > rest-discuss list; people's insistence on interpreting the URI opacity > axiom to mean they should create obtuse URL structures. To be clear: I am not advocating the use of an obtuse URL-space in production REST services. The phrase I used was: "... could be equally as valid ..." as an educational tool to change the mindset of the audience from REST == CRUD and URI-space == Database tables. As Aristotle implied in his original post, and as you repeat, the key to REST dynamic service discovery is document formatting. If the UA cannot parse the MIME type, then the URI-space cannot be traversed. It seems to me that the pre-eminent document formats at present that have sufficiently general support are: (X)HTML RDF (XML, N3, Turtle) possibly Atom, RSS possibly JSON There isn't much talk about Content Negotiation on the list, but I don't think that REST is complete without it. Or at least it becomes a pale shadow of itself. REST shouldn't advocate the use of a single MIME type either - otherwise we will end up repeating the WS-* world, where everything is a SOAP message, and the SOAP spec becomes ever more convoluted. Regards, Alan Dean http://thoughtpad.net/alan-dean
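The server-driven content negotiation Alan is calling for boils down to reading the client's Accept header, honoring its q-values, and answering 406 when nothing fits. A minimal sketch (mine, not from the thread; function names are hypothetical, and real servers also handle wildcards like `text/*` and media-type parameters):

```python
# Sketch of server-driven content negotiation: parse an Accept
# header's q-values and pick the best media type the server
# supports, or None (which a server would turn into a 406).

def parse_accept(header):
    """Return [(media_type, q)] sorted by descending q."""
    prefs = []
    for part in header.split(","):
        fields = [f.strip() for f in part.split(";")]
        mtype, q = fields[0], 1.0  # q defaults to 1.0 per HTTP
        for f in fields[1:]:
            if f.startswith("q="):
                q = float(f[2:])
        prefs.append((mtype, q))
    return sorted(prefs, key=lambda p: p[1], reverse=True)

def negotiate(accept_header, supported):
    """Pick the highest-q supported type, or None (-> 406)."""
    for mtype, q in parse_accept(accept_header):
        if q > 0 and (mtype in supported or mtype == "*/*"):
            return supported[0] if mtype == "*/*" else mtype
    return None

print(negotiate("application/xhtml+xml;q=0.9, application/atom+xml",
                ["application/xhtml+xml", "application/atom+xml"]))
# application/atom+xml wins: its implicit q=1.0 beats q=0.9
```

With one URI per resource, the same request line can yield any of the formats Alan lists, which is exactly what keeps the URI-space from multiplying per format.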
> > There isn't much talk about Content Negotiation on the list, but I > don't think that REST is complete without it. Or at least bcomes a > pale shadow of itself. > Well, I'm hoping to change that. The /date service ought to be polished up this week, at which point I intend to release it open-source (by adding a link to a .zip file to the service document). I'm jumping the gun a touch here, but: [1] http://ericjbowman.com/date Is a language-negotiated resource (the 300 Multiple Choices functionality is the last bit, totally wrong atm), currently the service document is only available in English but the <form> takes you to: [2] http://en.ericjbowman.com/date?iso=2000-01-01 If the negotiation point had served the date.de.html file instead of the date.en.html file, the <form> would point to: [3] http://de.ericjbowman.com/date?iso=2000-01-01 I have the same question as this thread, just what value if any would there be for me to describe this service using WADL? Speaking of content negotiation, yes it certainly is one of the key features of REST, but unfortunately it is also the Achilles' heel of HTTP, IMHO. Which is why you can override the conneg watchdog on my system by distracting it with a (representational) cookie. ;-) Of course, the output at [3] is just an example, this is work-in-progress. Implements HEAD, GET, PUT and DELETE. You can check if any given year is a leap year, by making a HEAD request for Feb. 29, the boolean response is either 200 OK or 400 Bad Request, 0004 AD - 4000 AD. Writeup, source code (and /state service) coming sometime in June. -Eric
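Eric's leap-year trick uses the status code itself as the boolean: a HEAD request for Feb. 29 of a given year answers 200 OK if the date exists and 400 Bad Request if it doesn't. His service's internals aren't published yet, so this is only a hypothetical sketch of the decision it implies (the function name and the 4 AD to 4000 AD bounds follow his post):

```python
# Hypothetical sketch of the server-side check behind Eric's
# leap-year-via-HEAD idea: 200 OK if Feb. 29 exists in the
# requested year, 400 Bad Request otherwise.
import calendar

def head_feb29_status(year):
    if not (4 <= year <= 4000):  # the range the post mentions
        return 400
    return 200 if calendar.isleap(year) else 400

print(head_feb29_status(2000))  # 200: 2000 is a leap year
print(head_feb29_status(1900))  # 400: divisible by 100 but not 400
```

Nothing but the status line needs to cross the wire, which is what makes HEAD a neat fit for yes/no questions like this.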
* Mike Schinkel <mikeschinkel@...> [2007-05-27 08:30]: > > > And in parrallel there has been complete resistence to some > > > kind of well-known hypermedia document formats. > > Complete resistance to HTML? Or Web forms? > > Resistence to defining (in essense) well known schemas or well > known sets of microformats for use with web services. I had > long and length discussions on that subject on this list about > nine months ago. That’s unfortunate. There are two possible extremes for REST: • One Media Type To Rule Them All (à la SOAP) • Every service comes up with its own media type Neither of these is healthy. The latter, in particular, gains you little over RPC in terms of decoupling, although you get some of the other benefits of REST. We need a middle ground: a small variety of somewhat generic media types that can be used for a wide variety of things. Individual services can then use one of them, and clients can then be implemented as glue on top of a library. That’s what I think is great about the AtomPP: it provides a base for a large array of services by defining a few base media types and a number of HTTP transactions complete with meanings, granular enough to be useful without customisation, but with clear extension hooks throughout. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
> That's unfortunate. There are two possible extremes for REST: > > • One Media Type To Rule Them All (à la SOAP) • Every service > comes up with its own media type > > Neither of these is healthy. The latter, in particular, gains > you little over RPC in terms of decoupling, although you get > some of the other benefits of REST. > > We need a middle ground: a small variety of somewhat generic > media types that can be used for a wide variety of things. > Individual services can then use one of them, and clients can > then be implemented as glue on top of a library. I agree with that completely. We could call this "Webful API" (to sidestep Roy's past concern about using the "REST" name.) Webful APIs could be defined by either new content types or a subset of existing content types, and each one would be defined by a string of information, vetted by working groups, and recorded at IANA. For example, assume we defined a Webful API for interacting with Events of the type that Eventful, UpComing, and Meetup allow you to schedule (btw, I need this.) Its identification string could be: "webful-api/events" Or "application/webful-api+xml/events" > That's what I think is great about the AtomPP: it provides a > base for a large array of services by defining a few base > media types and a number of HTTP transactions complete with > meanings, granular enough to be useful without customisation, > but with clear extension hooks throughout. To me this is too close to "the one Media Type to rule them all" (what would we do w/o Tolkien?!?). AtomPP seems to me to be more of a well-defined conduit for implementing services than anything that would help identify specific services, but I haven't been following it closely enough to know for sure. So at the risk of having an opinion based on ignorance, I'd say that AtomPP would be a great base but we still need to be able to define Webful APIs for specific services, i.e. 
"application/atomserv+xml/events" "Event"s here would still be vetted by a working group somewhere and still need to be registered with IANA. How does that sit with you? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On 5/27/07, Mike Schinkel <mikeschinkel@gmail.com> wrote: > > > That's unfortunate. There are two possible extremes for REST: > > > > • One Media Type To Rule Them All (à la SOAP) • Every service > > comes up with its own media type > > > > Neither of these is healthy. The latter, in particular, gains > > you little over RPC in terms of decoupling, although you get > > some of the other benefits of REST. > > > > We need a middle ground: a small variety of somewhat generic > > media types that can be used for a wide variety of things. > > Individual services can then use one of them, and clients can > > then be implemented as glue on top of a library. > > I agree with that complete. > > We could call this "Webful API" (to sidestep Roy's past concern about using > the "REST" name.) Webful APIs could be defined by either new content types > or a subset of existing content types, and each one would be defined by a > string of information, vetted by working groups, and recorded at IANA. For > example, assume we defined a Webful API for interacting with Events of the > type that Eventful, UpComing, and Meetup allow you to schedule (btw, I need > this.) It's identification string could be: > > "webful-api/events" > > Or > > "application/webful-api+xml/events" > > > That's what I think is great about the AtomPP: it provides a > > base for a large array of services by defining a few base > > media types and a number of HTTP transactions complete with > > meanings, granular enough to be useful without customisation, > > but with clear extension hooks throughout. > > To me this is too close to "the one Media Type to rule them all" (what would > we do w/o Tolkien?!?). AtomPP seems to me to be more of a well-defined > conduit for implementing services than anything that would help identify > specific services, but I haven't been following it closely enough to know > for sure. 
> > So at the risk of having an opinion based on ignorance, I'd say that AtomPP > would be a great base but we still need to be able to define Webful APIs for > specific services, i.e. > > "application/atomserv+xml/events" > > "Event"s here would still be vetted by a working group somewhere and still > need to be registered with IANA. > > How does that sit with you? Agree +1 Rather than 'webful' - why not use 'hypermedia', as this is the term employed by Roy? I, for one, would be happy to assist such an initiative. Regards, Alan Dean http://thoughtpad.net/alan-dean
> To be clear: I am not advocating the use of an obtuse > URL-space in production REST services. The phrase I used was: > > "... could be equally as valid ..." > > as an educational tool to change the mindset of the audience > from REST == CRUD and URI-space == Database tables. And to be clear, I was in no way trying to imply that you were advocating obtuseness in URL space. But I do know that many people interpret such statements as advocacy for obtuseness and over time it becomes dogma among some people. I was just making the point that anyone reading what you say who might interpret it as advocacy would be mistaken. Again, I'll quote the same URL [1] that (hopefully) should make my point even clearer. :-) [1] http://archive.salon.com/comics/boll/2007/01/11/boll/index.html > There isn't much talk about Content Negotiation on the list, > but I don't think that REST is complete without it. Or at > least it becomes a pale shadow of itself. Why? > REST shouldn't advocate the use of a single MIME type either > - otherwise we will end up repeating the WS-* world, where > everything is a SOAP message, and the SOAP spec becomes ever > more convoluted. I think I agree with you, but can you be more specific? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
It just occurred to me that I should clarify something. The "Webful API" concept is in no way a suggestion that all REST-based services should be required to use these content types, not at all! It is instead a suggestion to have a well-known set of media types that can be used for interoperable REST; the rest (no pun intended) would simply follow the REST architectural style as they have previously. BTW, having a set of well-known media types also means a set of well-known semantics for how the service should operate. These would be invaluable example case studies so newbies don't have to go down all the wrong paths before they figure out the right path. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On 5/27/07, Mike Schinkel <mikeschinkel@...> wrote: > > And to be clear, I was in no way trying to imply that you were advocating > obtuseness in URL space. :-) Textual communication has far less suppleness than conversation. > But I do know that many people interpret such > statements as an advocacy for obtuseness and over time it becomes dogma > among some people. I was just making the point that anyone reading what you > say who might interpret it as advocacy would be mistaken. A fair enough point. > > > There isn't much talk about Content Negotiation on the list, > > but I don't think that REST is complete without it. Or at > > least it becomes a pale shadow of itself. > > Why? > > > REST shouldn't advocate the use of a single MIME type either > > - otherwise we will end up repeating the WS-* world, where > > everything is a SOAP message, and the SOAP spec becomes ever > > more convoluted. > > I think I agree with you, but can you be more specific? One of the key constraints stipulated by Roy is the separation of concerns between client and server. Having a specialised message format would breach this constraint by coupling client and server. To avoid this fate, 'agnostic' MIME types are required - that is to say, MIME types that can be used generally for the type of application functionality. If you were to say "RESTful applications must use this specific MIME type" then I believe that you run the very real risk of "satisfying nobody by trying to please everybody". The more functionality that you try to embed into a format, the more cumbersome and unwieldy it becomes - this is the fate of SOAP. What was originally intended to be Simple is now anything but. Better to establish a pattern for RESTful MIME types, and then provide formats on an as-needed basis that adhere to that pattern. 
For example, another constraint of REST as set out by Roy is that the client stores session state and that "each request from client to server must contain all the information necessary to understand the request, and cannot take advantage of any stored context on the server." This begs for a MIME type pattern, imho. Let's say that a RESTful MIME type for e-commerce baskets had been devised. The client can discover if the server supports said MIME type by simply pinging the server with: -> GET / Accept: application/basket If the server does not support the basket MIME type, it simply responds: <- 406 Not Acceptable If the server does support the basket MIME type, then it could return an empty basket, laden with the hyperlinks to the basket URI-space on the server: <- 200 OK Content-Type: application/basket <basket xml:lang="en"> <product href="/fruit">Fruit</product> </basket> Hope that helps illuminate what my thoughts are. Regards, Alan Dean http://thoughtpad.net/alan-dean
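Alan's discovery handshake can be simulated end to end without a network: a GET carrying "Accept: application/basket" either earns the empty basket or a 406. A sketch of the server side of that exchange (mine, not Alan's code; the media-type string and XML come straight from his example, the function name is hypothetical):

```python
# Sketch simulating Alan's discovery ping: a client asks for
# application/basket; the server answers 200 with an empty
# basket if it speaks the type, 406 Not Acceptable if not.

BASKET_TYPE = "application/basket"

def handle_get(accept_header, supported_types):
    """Return (status, body) for a discovery GET on /."""
    if BASKET_TYPE in accept_header and BASKET_TYPE in supported_types:
        body = ('<basket xml:lang="en">'
                '<product href="/fruit">Fruit</product></basket>')
        return 200, body
    return 406, None

status, body = handle_get("application/basket", {"application/basket"})
print(status)  # 200, and body carries the entry link to /fruit
```

The point of the pattern is that the client learns the URI-space (here, /fruit) from the representation itself rather than from out-of-band documentation.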
> One of the key constraints stipulated by Roy is the > separation of concerns between client and server. Having a > specialised message format would breach this constraint by > coupling client and server. To avoid this fate, 'agnostic' > MIME types are required - that is to say, MIME types that can > be used generally for the type of application functionality. Fair enough. As I think it through, it seems that REST services may not have the same issues with content negotiation as the open web. In the case of the latter, the user is given a URL but can't get the type they want because their browser is configured to prioritize another type higher and they don't know how to get around it. Or worse, one user sends another a URL and they see different things, but the sender is unaware that will happen. Web services strike me as maybe not having those same issues. > If you were to say "RESTful applications must use this > specific MIME type" then I believe that you run the very real > risk of "satisfying nobody by trying to please everybody". See the other email I sent, which crossed with yours in the mail and (I think) clarifies my thoughts on that. > Better to establish a pattern for RESTful MIME types, and > then provide formats on an as-needed basis that adhere to > that pattern. > > For example, another constraint of REST as set out by Roy is > that the client stores session state and that "each request > from client to server must contain all the information > necessary to understand the request, and cannot take > advantage of any stored context on the server." This begs for > a MIME type pattern, imho. Agreed +1 > Let's say that a RESTful MIME type for e-commerce baskets had > been devised. 
The client can discover if the server supports > said MIME type by simply pinging the server with: > > -> > GET / > Accept: application/basket > > If the server does not support the basket MIME type, it > simply responds: > > <- > 406 Not Acceptable > > If the server does support the basket MIME type, then it > could return an empty basket, laden with the hyperlinks to > the basket URI-space on the server: > > <- > 200 OK > Content-Type: application/basket > > <basket xml:lang="en"> > <product href="/fruit">Fruit</product> </basket> > > Hope that helps illuminate what my thoughts are. Woohoo! I like! -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
> I'd dig up the numerous references > for you but frankly don't have the energy to do so at the moment. Okay, I'll look forward to that. > I don't disagree, but that's misleading. Aristotle's > definition was better: > > The crucial point is that the URI space is > controlled by the server and the server > alone; the client should not expect any > particular layout. That Aristotle was one smart dude. Or maybe you are talking about a different Aristotle. When considering HTTP - and I believe REST as well - the client is a participant in the URI space. For example, the PUT request in HTTP supports the client submitting the URI rather than asking the server to generate one. How that URI is determined is de-coupled from the ability of the client to submit a new URI, though. > > > > And in parallel there has been complete resistance to some kind of > > > well-known hypermedia document formats. > > Complete resistance to HTML? Or Web forms? > > Resistance to defining (in essence) well-known schemas or > well-known sets of microformats for use with web services. I > had long and lengthy discussions on that subject on this list > about nine months ago. I didn't pay too close attention then. Were these formats for /defining/ web services or for use /with/ web services?
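The PUT-to-a-client-chosen-URI idea above can be made concrete as a raw request. A sketch (mine, not from the post; the path and body are invented, and the "If-None-Match: *" precondition is my addition, the standard HTTP guard that asks the server to create the resource only if nothing already lives at that URI):

```python
# Sketch of a PUT where the client, not the server, picks the
# target URI. "If-None-Match: *" (my addition) makes it a
# create-only request: the server refuses if the URI is taken.

def build_put(path, body, host="example.com"):
    payload = body.encode("utf-8")
    return (
        f"PUT {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "If-None-Match: *\r\n"
        "Content-Type: text/plain\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "\r\n"
        f"{body}"
    )

req = build_put("/notes/shopping-list", "milk, eggs")
print(req.splitlines()[0])  # PUT /notes/shopping-list HTTP/1.1
```

The server remains the authority: it is free to answer 403 or redirect if it won't honor that slot, which is the de-coupling Mike describes.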
You could write in your blog about when it is appropriate to use non-obtuse URLs and when it's appropriate to generate URLs on the client and what situations a well-structured URL helps with (relative references are one example). Also, when you meet people in person or on mailing lists that promote the idea that obtuse URLs are the one true way, you should point out counter examples. First they ignore you, then they laugh at you...
Keyur Shah wrote: > I may be mixing incongruent pieces here but here's another question - If > I set the Vary header as "Vary: Accept-Encoding" - is it then ok to set > the same (weak) ETag?... Vary: Accept-Encoding means that the same URI might be served with different entities depending on the Accept-Encoding header. ETags say which entity was actually given. A different entity gets a different entity tag (hence the name). You might be wondering why we would need entity tags when we have the Vary header. For one thing, the Vary header doesn't always give us enough to know how to identify a given entity. Consider a response with Vary: Accept-Language; it might return the same entity for "en-US, en, fr" as for "en, fr" and it might not, so the value of Accept-Language isn't sufficient for a client or intermediary to know that two responses will be the same. > I need to understand transfer-encoding better... I am unfamiliar with > how the semantics of transfer-encoding work. But thanks for the pointer. Transfer encoding is much the same as content encoding, but it happens point-to-point rather than end-to-end. Conceptually the exact same entity is transferred whatever transfer encoding is used; encodings are used only in the physical transfer of those entities. Because of this, questions about how something should be cached and how it should be saved are clearer with transfer encoding (once you receive it, decode it and then act as if there was no such thing as a transfer encoding). Robert Sayre wrote: > An entity tag must be assigned before the range selection. Otherwise, > a client trying to assemble a full result from two or more ranges (in > multiple messages) could not match the entity tags to test cache > coherency. And of course in this case it must not be a weak entity tag, because a change of as little as a single octet could mess up such re-assembly. 
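Jon's point about Vary is essentially a caching rule: the headers Vary names become part of the key under which a response is stored. A sketch of how a cache might build that key (illustrative only; real caches normalize header values and handle "Vary: *" specially):

```python
# Sketch of a cache key that honors Vary: the URI plus the
# request's value for every header the response's Vary names.
# Requests share a cached entry only if all varied headers match.

def cache_key(uri, vary_header, request_headers):
    parts = [uri]
    for name in (vary_header or "").split(","):
        name = name.strip().lower()
        if name:
            parts.append(f"{name}={request_headers.get(name, '')}")
    return "|".join(parts)

k_gzip = cache_key("/page", "Accept-Encoding",
                   {"accept-encoding": "gzip"})
k_identity = cache_key("/page", "Accept-Encoding",
                       {"accept-encoding": "identity"})
print(k_gzip != k_identity)  # True: each encoding gets its own slot
```

This also shows why Vary alone is not enough, per Jon's Accept-Language example: two different header values can key different slots even when the server would have sent the identical entity, which is exactly the gap entity tags fill.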
> The specification must allow an entity tag to be assigned after the > application of a content-coding, because it already allows the server > to store its data in a pre-encoded form (and thus to require the > entity tag to be assigned prior to any content coding would make all > existing servers non-compliant). Implementation matters. When we receive an entity that's encoded we neither know nor care if it was pre-encoded, and for that matter if we receive an unencoded entity we don't know whether it was stored unencoded or stored encoded and decoded on the fly. All that matters is that if a client sees the same entity tag in a response for the same URI it can assume the entity is exactly the same octet-for-octet (if strong) or the same for most purposes (if weak).
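The strong/weak distinction Jon leans on has precise comparison rules in the HTTP spec: the strong comparison matches only two identical strong tags (safe for range re-assembly), while the weak comparison ignores the W/ prefix. A sketch of both (mine, following those rules; not code from the thread):

```python
# Sketch of HTTP's two entity-tag comparison functions.
# Strong comparison: both tags strong and identical (byte-exact
# equivalence, required for things like Range re-assembly).
# Weak comparison: opaque parts equal, weakness ignored.

def is_weak(tag):
    return tag.startswith("W/")

def opaque(tag):
    return tag[2:] if is_weak(tag) else tag

def strong_compare(a, b):
    return not is_weak(a) and not is_weak(b) and a == b

def weak_compare(a, b):
    return opaque(a) == opaque(b)

print(strong_compare('"1234"', 'W/"1234"'))  # False
print(weak_compare('"1234"', 'W/"1234"'))    # True
```

This is why the re-assembly case above must not use a weak tag: weak equivalence promises only "the same for most purposes", not octet-for-octet identity.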
Julian Reschke wrote: > So my understanding is that 410 is a stronger variant of 404: "there's > nothing here, and it's going to stay that way". I wouldn't be too firm on "it's going to stay that way". I would take it as "There was something here (404 doesn't say that something was there), it isn't there now, and there are no plans to put something here again (a 404 could be a finger slip or a temporary state due purely to maintenance)". The spec says "This condition is expected to be considered permanent." There's an expectation that it's going to stay that way, but no guarantee. So, any references you currently have to the resource should be considered no longer valid, but you shouldn't cache the 410 response and, on the basis of that cached information, refuse to deal with any new references you receive: maybe the "expectation" behind the 410 was wrong, and a new reference could be based on information from after the URI came back into use. In general, we can't assume anything is permanent on the web, and that applies to absences as well as presences.
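The 410-versus-404 distinction Jon draws maps naturally onto a server that remembers which URIs were deliberately retired. A sketch (the "tombstone" set and all names are my illustration, not from the thread):

```python
# Sketch: a server keeps a "tombstone" set of URIs it knows were
# deliberately removed; those answer 410 Gone. Anything else it
# doesn't recognize gets the noncommittal 404 Not Found.

TOMBSTONES = {"/old-campaign", "/retired-api/v1"}
LIVE = {"/home", "/about"}

def status_for(path):
    if path in LIVE:
        return 200
    if path in TOMBSTONES:
        return 410  # was here, intentionally gone, expected permanent
    return 404      # never here, a typo, or merely missing right now

print(status_for("/old-campaign"))  # 410
print(status_for("/typo"))          # 404
```

Note that a server can later take a path out of TOMBSTONES and serve it again, which is Jon's point: the "permanence" behind 410 is an expectation, not a promise clients may cache forever.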
Mike Dierken wrote: > You could write in your blog about when it is appropriate to use non-obtuse > URLs To my mind it's always appropriate to use non-obtuse URIs. However, degrees of obtuseness are often in the eye of the beholder. This is one of the reasons why we often get people worrying overmuch about which of several reasonable URI designs they should use. I don't think it's accurate to even discuss whether URIs should or should not be obtuse; rather, URIs *are* obscure. You say http://www.ginger.com/stay_out_of_the_garbage and the routing code sees "blah blah Ginger blah blah", code mapping a path to a file sees "blah blah stay out of the garbage", while the code comparing URIs with history or cache entries just sees "blah blah blah blah blah blah". > and when it's appropriate to generate URLs on the client and what I think this is only appropriate when a client has a degree of "ownership" over a portion of the URI space the server manages. The server is still the authority for URI mappings, but some clients can instruct the server in terms of how to exercise that authority.
> You could write in your blog about when it is appropriate to > use non-obtuse URLs and when it's appropriate to generate > URLs on the client and what situations a well-structured URL > helps with (relative references are one example). Two replies to that: 1.) Can I assume you haven't been reading my URL blog? (Of course, I assume someone hasn't until they tell me otherwise.) There I have tried to explain why non-obtuse URLs are important. Maybe I haven't done the best job I could have, but that's what I've been doing for a while now. 2.) I agree with Jon Hanna: "It's always appropriate to use non-obtuse URLs", so I wouldn't ever be explaining when it *is* appropriate. :) Interestingly, my last post [1] was on this issue, and I've had quite an interesting comment debate going with one of the Seaside true believers. He said "I understand your position, but all of your arguments are based on the idea that REST is the only correct architecture, and that simply isn't true. REST is an architecture, one among many, and each has advantages and disadvantages. Every URL simply does not need to be restful." I'd actually greatly appreciate it if anyone and everyone on this list read the post and comments, and if you feel compelled, added your voice as a comment, even if you disagree with me or feel I was being overly harsh. > Also, when you meet people in person or on mailing lists that > promote the idea that obtuse URLs are the one true way, you > should point out counter examples. Oh, I do. I do! :-) > First they ignore you, then they laugh at you... .. then they fight you, then you win. :-) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us [1] http://blog.welldesignedurls.org/2007/05/19/seeing-things-the-way-in-which-one-wants-them-to-be-not-the-way-they-are/
> > I'd dig up the numerous references > > for you but frankly don't have the energy to do so at the moment. > Okay, I'll look forward to that. Here [1] is one example (it's actually an interesting thread, and the one that led me to [rest-discuss] back in Oct 2006, specifically because of URI Opacity dogma.) There were also examples I found searching my old emails where I replied to someone, but they seem to be missing from the [rest-discuss] archives. It's almost like someone deleted them. Strange. I can forward them to you privately if you really care. > I didn't pay too close attention then. Were these formats > for /defining/ web services or for use /with/ web services? As with anything approaching religious fervor, it wasn't a reasoned debate as much as an ideological one. At least that's my memory of it. But who knows, I could have been the one with the ideology. ;-) Seriously though, I think it was more about defining, but you'll have to clarify what you are asking for me to be sure. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us [1] http://microformats.org/discuss/mail/microformats-rest/2006-October/000294.html
On 5/27/07, Jon Hanna <jon@...> wrote: > > Implementation matters. When we receive an entity that's encoded we > neither know nor care if it was pre-encoded And yet clients make several assumptions about where that Etag assignment occurred. Just *how* it occurred is an implementation matter. I think the paper makes a different point than the one you're making. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
Mike Schinkel wrote: > > > > No one said you *should* create obtuse URI spaces; the point > > is that the layout of your URI space has no effect on whether > > your application is RESTful. The crucial point is that the > > URI space is controlled by the server and the server alone; > > the client should not expect any particular layout. That > > doesn't mean that you shouldn't design your URIs carefully. > > Agreed. But from past experience many people *interpret* that it means you > should create obtuse URLs. It's called dogma, and sadly appears around > any 'religion.' This [1] should put it in perspective. '-) People on this list have been dealing with that kind of strenuous pushback for over half a decade. Stick to your guns, you'll be fine :) cheers Bill
Mike Schinkel wrote: > > > Alan Dean: > > There isn't much talk about Content Negotiation on the list, > > but I don't think that REST is complete without it. Or at > > least it becomes a pale shadow of itself. > > Why? I think one reason is that without conneg, you end up providing a URI for each supported format, and URI proliferation is hardly a good thing. A few systems do that now; the Zimbra API would be one, MoinMoin is another. Here's a simple example: xhtml: <http://www.citizensinformation.ie/categories/money-and-tax/tax/duties-and-vat/stamp-duty-on-financial-cards> atom: <http://www.citizensinformation.ie/categories/money-and-tax/tax/duties-and-vat/stamp-duty-on-financial-cards/entry.xml> I've said before I'm not sure what the right thing is here with respect to deployed web infrastructure, but I suspect conneg is a place where the de facto web architecture diverges significantly. cheers Bill
Robert Sayre wrote: > On 5/27/07, Jon Hanna <jon@...> wrote: >> >> Implementation matters. When we receive an entity that's encoded we >> neither know nor care if it was pre-encoded > > And yet clients make several assumptions about where that Etag > assignment occurred. Just *how* it occurred is an implementation > matter. I think the paper makes a different point than the one you're > making. It's more that I think about it differently, in terms of "what" rather than "where". The only "where"s I think a client should think about are "somewhere under my control", "somewhere on an intermediary" and "somewhere on the server". The first means within the client itself, so we aren't out onto the web yet and aren't dealing in REST. The other two are distinguished by the (imperfect) mechanism for knowing which headers are hop-by-hop and which are end-to-end. Beyond that they're black-boxes and the E-Tags could be magically appended by the E-Tag fairy for all the client knows. So in terms of "what" we come back to E-Tags tag entities, not resources (which we already have URIs to tag). While this amounts to much the same thing as saying "where" I think it's clearer because it's talking about the sort of objects that it's a client's business to deal with. Further, a server-side process of "this is the E-Tag that will go with this resource, because it will only ever have one representation" or "this is the E-Tag that will go with this resource, unless I do some sort of transformation on it in which case I need a different one" (perhaps based on that first E-Tag - an E-Tag of "1234" for the unencoded and "1234gz" for the g-zipped is perfectly valid) are both perfectly okay - they have the "where" wrong by that way of looking at it, but they still get the "what" right, and have the same effect as if the "where" was right.
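Jon's "1234" / "1234gz" scheme can be sketched in a few lines. Only gzip appears in his example; the "deflate" suffix here is my own illustrative addition, and none of this is mandated by HTTP — any scheme that keeps the variants distinct works:

```python
def variant_etag(base_etag, encoding=None):
    """Derive a distinct ETag per content-coding, so an If-Match
    against the gzipped variant can never match the unencoded one."""
    if encoding is None:
        return f'"{base_etag}"'
    # Suffix scheme as in Jon's example: "1234" -> "1234gz" for gzip.
    suffix = {"gzip": "gz", "deflate": "df"}[encoding]
    return f'"{base_etag}{suffix}"'
```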
Bill de hOra wrote: > I think one reason is that without conneg, you end up > providing a URI for each supported format, and URI > proliferation is hardly a good thing. Why is it bad, if it follows a pattern? IOW, here is one URL w/conneg: http://example.com/myphoto.img And here are others w/o: http://example.com/myphoto.gif http://example.com/myphoto.jpg http://example.com/myphoto.png Actually, I'd argue that you really need all four URLs, for different use-cases. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
* Hugh Winkler <hughw@...> [2007-05-23 05:35]: > Forms are an essential part of the web, but not necessarily of > the REST style. Why not? The key to REST is that the server sends the client a description of what resources the client can access, and in which way. There is nothing about forms that contradicts this principle. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Nic James Ferrier <nferrier@...> [2007-05-24 15:45]:
> So I can imagine a situation like this:
>
> GET /resource
> => 200
> <doc>
>   <select name="status">
>     <option value="off" selected="true"/>
>     <option value="on"/>
>   </select>
> </doc>
I’d be a little more explicit:
<form method="post">
  <select name="status">
    <option value="off" selected="true" disabled="disabled"/>
    <option value="loading-microcode" disabled="disabled"/>
    <option value="init-device" disabled="disabled"/>
    <option value="on"/>
  </select>
</form>
(Note I’m also opting for POST rather than PUT.)
At any stage, only the transitions which can legally be initiated
by the client aren’t disabled. So right after a
`POST /resource?status=on`, the representation becomes
<form method="post">
  <select name="status">
    <option value="off" disabled="disabled"/>
    <option value="loading-microcode" selected="true" disabled="disabled"/>
    <option value="init-device" disabled="disabled"/>
    <option value="on" disabled="disabled"/>
  </select>
</form>
which means the client can’t do anything at this time.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
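Aristotle's state machine can be sketched as a tiny renderer that disables every option except the transitions legal from the current state. The state list mirrors his example; that "on" can transition back to "off" is an assumption of mine, not something his posting states:

```python
STATES = ["off", "loading-microcode", "init-device", "on"]

# Which target states the client may legally request from each state.
# Intermediate states enable nothing: a transition is in progress.
ENABLED = {
    "off": ["on"],
    "loading-microcode": [],
    "init-device": [],
    "on": ["off"],  # assumption: the device can be turned off again
}

def render_form(current):
    """Render the form representation for the current state."""
    lines = ['<form method="post">', '  <select name="status">']
    for state in STATES:
        attrs = [f'value="{state}"']
        if state == current:
            attrs.append('selected="true"')
        if state not in ENABLED[current]:
            attrs.append('disabled="disabled"')
        lines.append(f'    <option {" ".join(attrs)}/>')
    lines.extend(["  </select>", "</form>"])
    return "\n".join(lines)
```

The client never hard-codes the lifecycle; it just looks at which options aren't disabled.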
Mike Schinkel wrote: > Bill de hOra wrote: >> I think one reason is that without conneg, you end up >> providing a URI for each supported format, and URI >> proliferation is hardly a good thing. > > Why is it bad, if it follows a pattern? Clients have to know the patterns. Patterns lack uniformity. There's an existing specified mechanism. Someone might write ".jpeg" or "/jpeg". The server URLs need to be supported. The client code needs to be supported. Supporting relation schemes for embedding in representations are needed for linking, to avoid clients treating said patterns as APIs. Multiple such schemes are needed for each representation type. That could be tricky for non-textual media. Semwebbers get into description/denotation headaches and need to write a lot of owl:sameAs (actually it wouldn't be same:As). A shadow description system to the media type is created. There's a history of baking file names into URLs not working out well. These problems are compounded as a function of how well adopted such patterns become. URI templates are mostly fictional; they need to be extended to support media types. The eventual standard would probably look like content negotiation, but maybe in XML. All that said, for one's own site, file extensions will work. But only for one's own site. Which brings us back around to URI opacity, and uniformity. cheers Bill
> > Why is it bad, if it follows a pattern? > > Clients have to know the patterns. Patterns lack uniformity. > There's an existing specified mechanism. Someone might > write ".jpeg" or "/jpeg". The server URLs need to be > supported. The client code needs to be supported. Supporting > relation schemes for embedding in representations are needed > for linking, to avoid clients treating said patterns as > APIs. Multiple such schemes are needed for each > representation type. That could be tricky for non-textual media. > Semwebbers get into description/denotation headaches and need > to write a lot of owl:sameAs (actually it wouldn't be > same:As). A shadow description system to the media type is > created. There's a history of baking file names into URLs not > working out well. These problems are compounded as a function > of how well adopted such patterns become. URI templates are > mostly fictional; they need to be extended to support media > types. The eventual standard would probably look like content > negotiation, but maybe in XML. Thanks for answering. My knee-jerk reaction is that I want to debate you on this, but I'll defer to your considerably greater experience in this area, if only until some future date when I may revisit the debate. :) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Mike Schinkel wrote: > > > > > Why is it bad, if it follows a pattern? > > > > Clients have to know the patterns. Patterns lack uniformity. > > There's an existing specified mechanism. Someone might > > write ".jpeg" or "/jpeg". The server URLs need to be > > supported. The client code needs to be supported. Supporting > > relation schemes for embedding in representations are needed > > for linking, to avoid clients treating said patterns as > > APIs. Multiple such schemes are needed for each > > representation type. That could be tricky for non-textual media. > > Semwebbers get into description/denotation headaches and need > > to write a lot of owl:sameAs (actually it wouldn't be > > same:As). A shadow description system to the media type is > > created. There's a history of baking file names into URLs not > > working out well. These problems are compounded as a function > > of how well adopted such patterns become. URI templates are > > mostly fictional; they need to be extended to support media > > types. The eventual standard would probably look like content > > negotiation, but maybe in XML. > > Thanks for answering. My knee-jerk reaction is that I want to debate you on > this, but I'll defer to your considerably greater experience in this area, > if only until some future date when I may revisit the debate. :) Mike, you'd probably win. I am highly conflicted on what to do around content-negotiation. Next month, I'll be arguing pro per-format URLs :\ cheers Bill
On May 28, 2007, at 6:04 AM, Mike Schinkel wrote: > [1] > http://blog.welldesignedurls.org/2007/05/19/seeing-things-the-way- > in-which-o > ne-wants-them-to-be-not-the-way-they-are/ OK, I'll bite: The argument you make here seems to be for hackable, human-readable URLs. I prefer them myself to meaningless, opaque strings. But I don't believe they're that important. Most importantly, though, I don't see at all what this has got to do with REST. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On 5/30/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > OK, I'll bite: The argument you make here seems to be for hackable, > human-readable URLs. I prefer them myself to meaningless, opaque > strings. But I don't believe they're that important. Most > importantly, though, I don't see at all what this has got to do with > REST. +1
About the same thing that picking good class names has to do with Object-Oriented Programming. --Chuck On 5/30/07, Stefan Tilkov <stefan.tilkov@...> wrote: > On May 28, 2007, at 6:04 AM, Mike Schinkel wrote: > > > [1] > > http://blog.welldesignedurls.org/2007/05/19/seeing-things-the-way- > > in-which-o > > ne-wants-them-to-be-not-the-way-they-are/ > > OK, I'll bite: The argument you make here seems to be for hackable, > human-readable URLs. I prefer them myself to meaningless, opaque > strings. But I don't believe they're that important. Most > importantly, though, I don't see at all what this has got to do with > REST. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > > Yahoo! Groups Links > > > >
Not quite. OO systems target humans and therefore benefit from semantics encoded into names. REST targets machines and relies on discovery through contextual semantics (hypermedia). The former would crumble with arbitrary names, the latter doesn't care. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org On May 30, 2007, at 6:27 AM, Chuck Hinson wrote: > About the same thing that picking good class names has to do with > Object-Oriented Programming. > > --Chuck > > On 5/30/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > On May 28, 2007, at 6:04 AM, Mike Schinkel wrote: > > > > > [1] > > > http://blog.welldesignedurls.org/2007/05/19/seeing-things-the-way- > > > in-which-o > > > ne-wants-them-to-be-not-the-way-they-are/ > > > > OK, I'll bite: The argument you make here seems to be for hackable, > > human-readable URLs. I prefer them myself to meaningless, opaque > > strings. But I don't believe they're that important. Most > > importantly, though, I don't see at all what this has got to do with > > REST. > > > > Stefan > > -- > > Stefan Tilkov, http://www.innoq.com/blog/st/ > > > > > > > > Yahoo! Groups Links > > > > > > > > > >
Steve Bjorg wrote: > Not quite. OO systems target humans and therefore benefit from > semantics encoded into names. REST targets machines and relies on > discovery through contextual semantics (hypermedia). The former would > crumble with arbitrary names, the latter doesn't care. 1. Humans tend to come into the process at some point, even when the intention is that they don't. 2. Relative URI-reference syntax depends on a structure and relative URI syntax can be handy in creating the hypermedia used on REST systems. REST does not need meaningful URIs in the slightest. They are of zero value to REST qua REST. This does not mean that RESTful systems cannot *also* benefit from structured URI design. Aside from perhaps educational or rhetorical examples RESTful systems are not designed with a sole purpose of being RESTful systems - they generally have some goals beyond that. In those other goals, and in being easy to understand by developers, it can be useful to have structured and/or meaningful URIs. As long as the only authoritative source of information about those URIs remains the hypermedia it is not contrary to REST for us to obtain those benefits.
Jon Hanna wrote: > 1. Humans tend to come into the process at some point, even > when the intention is that they don't. > > 2. Relative URI-reference syntax depends on a structure and > relative URI syntax can be handy in creating the hypermedia > used on REST systems. > > REST does not need meaningful URIs in the slightest. They are > of zero value to REST qua REST. This does not mean that > RESTful systems cannot > *also* benefit from structured URI design. > > Aside from perhaps educational or rhetorical examples RESTful > systems are not designed with a sole purpose of being RESTful > systems - they generally have some goals beyond that. In > those other goals, and in being easy to understand by > developers, it can be useful to have structured and/or > meaningful URIs. > > As long as the only authoritative source of information about > those URIs remains the hypermedia it is not contrary to REST > for us to obtain those benefits. +1 -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
>> REST targets machines and relies on discovery through contextual semantics (hypermedia). I thought the web at large *was* REST, so more than just for machines. At least that's what Roy Fielding said about eight months back when I first asked for good examples of REST. But I agree with Jon Hanna: even URLs designed for machines still need to be handled and understood by humans, if for no other reason than programming and debugging. So I assert: whenever possible, design your URLs, as it can make your REST architecture less obtuse. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On 5/30/07, Jon Hanna <jon@...> wrote: > > Aside from perhaps educational or rhetorical examples RESTful systems > are not designed with a sole purpose of being RESTful systems Earlier in the thread I gave a hypothetical example for educational purposes, see http://tech.groups.yahoo.com/group/rest-discuss/message/8615 This was in order to illuminate some specific points about REST in my presentation. Alan
Mike Schinkel wrote: > Bill de hOra wrote: >> I think one reason is that without conneg, you end up >> providing a URI for each supported format, and URI >> proliferation is hardly a good thing. > > Why is it bad, if it follows a pattern? IOW, here is one URL w/conneg: > > http://example.com/myphoto.img > > And here are others w/o: > > http://example.com/myphoto.gif > http://example.com/myphoto.jpg > http://example.com/myphoto.png http://example.com/myphoto.gif may not be the same resource as http://example.com/myphoto.jpg. One would have to assume they were different resources unless told otherwise (by some mechanism). If they were the same resource one could merrily use either URI. This is not the case here. I think it's perfectly acceptable to have http://example.com/doc return French and English versions based on conneg and to contain links like <a href="doc.en">English Version</a> <a href="doc.fr">Version française</a> but: 1. That does not make the linking an alternative to con-neg. It solves a different problem. Con-neg solves the problem of getting a usable representation to the client. Explicit linking to alternatives solves the problem of getting a particular representation to a client that has a reason to want that particular representation through defining related resources which reflect those representations. 2. It is not applicable to the case of content-type unless all of the content-types are hypermedia formats. There is no way for .gif to indicate it has a .png alternative.
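The division of labour Jon describes — conneg for a usable representation, explicit links for a particular one — can be sketched like so. The Accept-Language parsing is deliberately naive (exact tags, ";q=" with no spaces), and the ".en"/".fr" suffix convention is just his example, not a standard:

```python
def pick_language(accept_language, available):
    """Server-driven conneg: choose a language for the response."""
    prefs = {}
    for part in accept_language.split(","):
        tag, _, q = part.strip().partition(";q=")
        prefs[tag.strip()] = float(q) if q else 1.0
    best = max(available, key=lambda lang: prefs.get(lang, 0))
    # Fall back to a default if the client expressed no usable preference.
    return best if prefs.get(best, 0) > 0 else available[0]

def alternates(doc_url, available):
    """Explicit links naming the language-specific resources."""
    return [f'<a href="{doc_url}.{lang}">{lang} version</a>'
            for lang in available]
```

The two mechanisms coexist: http://example.com/doc negotiates, while doc.en and doc.fr pin a particular representation down.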
"Mike Schinkel" <mikeschinkel@...> writes: > I thought the web at large *was* REST, so more than just for machines. At > least that's what Roy Fielding said about eight months back when I first > asked for good examples of REST. Well... it's not all REST is it? I mean there are plenty of non-RESTful sites out there which clearly can't be RESTful. The thing is... those tend to be "applications". But the sites around them, the contact us page, the list of staff, etc... because they're just static files, they *are* RESTful. So we can see that there is a fairly large test of REST working. > But I agree with Jon Hanna, even URLs designed for machines still need to be > handled and understood by humans if for no other reason than programming and > debugging. So I assert, whenever possible, design your URLs as it can make > your REST architecture less obtuse. Less obtuse to hackers at least. It's an important point I think. I don't believe in the meaningfulness of URLs. We have no specifications for that. But I do believe in making them readable to hackers in the same way I believe in making my code readable. One other thought: we don't really need specifications to make URLs usable either. Duck typing would work. There is already so much that could be done just by interpreting the things we can (pretty much) take for granted about URL structure. -- Nic Ferrier http://www.tapsellferrier.co.uk
Nic James Ferrier wrote: > > "Mike Schinkel" <mikeschinkel@... > <mailto:mikeschinkel%40gmail.com>> writes: > > I thought the web at large *was* REST [...] > > Well... it's not all REST is it? I mean there are plenty of > non-RESTful sites out there which clearly can't be RESTful. Web sites which 'clearly can't be restful'? I think this begs the question as to whether you can provide some examples... You mention that they 'tend to be applications' ... do you mean flash games and the like? Seems like a good time to point out a(nother) oft-neglected constraint of REST, Code-On-Demand: <http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7>. Best, Elias
> http://example.com/myphoto.gif may not be the same resource > as http://example.com/myphoto.jpg. One would have to assume > they were different resources unless told otherwise (by some > mechanism). > > If they were the same resource one could merrily use either > URI. This is not the case here. Agreed. My discussion assumed a convention on the server that they represented the equivalent resource. > 1. That does not make the linking an alternative to con-neg. > It solves a different problem. Con-neg solves the problem of > getting a usable representation to the client. Explicit > linking to alternatives solves the problem of getting a > particular representation to a client that has a reason to > want that particular representation through defining related > resources which reflect those representations. Agree 100% > 2. It is not applicable to the case of content-type unless > all of the content-types are hypermedia formats. There is no > way for .gif to indicate it has a .png alternative. True, true, true (except, via HTTP headers. But that's another subject entirely... :) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Nic James Ferrier wrote: > > I thought the web at large *was* REST, so more > > than just for machines. At least that's what Roy > > Fielding said about eight months back when I > > first asked for good examples of REST. > > Well... it's not all REST is it? I mean there are plenty of > non-RESTful sites out there which clearly can't be RESTful. Hi Nic! I knew this discussion would eventually flush you out of the woodwork! ;-) > Less obtuse to hackers at least. Security by obscurity, eh? '-) > One other thought: we don't really need specifications to > make urls usable either. Duck typing would work. There is > already so much that could be done just by interpreting the > things we can (pretty much) take for granted about url structure. Not sure where you are headed. Please clarify, with examples if possible. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
> OK, I'll bite: The argument you make here seems to be for > hackable, human-readable URLs. I prefer them myself to > meaningless, opaque strings. But I don't believe they're that > important. Well I do believe they are "that important," so there! :-) But seriously, "that important" is so purely a subjective measure that we can stir up a perfectly rousing religious argument, you know the kind based on differing yet unstated values? Instead I will try to focus on the objective measures where we can hope to gain some progress based on achieving consensus! > Most importantly, though, I don't see at all what > this has got to do with REST. It doesn't directly. If you note I didn't use the term REST in the blog post at all. It was the commenter Ramon Leon who dragged in REST by saying "If you're running an ecommerce site, you might want your products catalog behind a clean restful url structure, but you might not want or need your checkout process to be linkable" and then he went on to explain why REST wasn't important in many cases. I suggested he say so on this list but he demurred. So I was simply making a pointer to the discussion in hopes some RESTafarians would have even better arguments than me about why REST *is* important on the web. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us "It never ceases to amaze how many people will proactively debate away attempts to improve the web..."
"Mike Schinkel" <mikeschinkel@...> writes: >> Less obtuse to hackers at least. > > Security by obscurity, eh? '-) No. I was trying to get over the point about code readability. I'm talking about architectural readability. You could name URLs like this: service/person service/person/avatar service/person/events or you could name URLs like this: X76861386/2671668387123PPP X76861386/2671668387123PPP/hhh6 X76861386/2671668387123PPP/hhh8 there's no difference in terms of architecture EXCEPT the readability. The *precise* extent to which readability is useful is debatable. But we know that it is useful. >> One other thought: we don't really need specifications to >> make urls usable either. Duck typing would work. There is >> already so much that could be done just by interpreting the >> things we can (pretty much) take for granted about url structure. > > Not sure where you are headed. Please clarify, with examples if > possible. Well, we do know that this: service/person/avatar is a subresource of: service/person or, at least, we know to the point that we can cope with just doing something exceptional (an error message?) when we find out that's not the case. -- Nic Ferrier http://www.tapsellferrier.co.uk
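Nic's "duck typing" guess — that service/person/avatar hangs off service/person — is just path truncation, with a failed guess handled as the exceptional case. A sketch (the function name is illustrative):

```python
def parent_resource(url):
    """Guess the parent resource of a hierarchical URL.
    Returns None at the root; a wrong guess is the caller's
    exceptional case (e.g. a 404 and an error message)."""
    trimmed = url.rstrip("/")
    if "/" not in trimmed:
        return None
    return trimmed.rsplit("/", 1)[0]
```

Nothing in any specification guarantees the guess; it only works to the extent that servers follow the convention, which is exactly Nic's point.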
Nic James Ferrier wrote: > there's no difference in terms of architecture EXCEPT the > readability. The *precise* extent to which readability is > useful is debatable. But we know that it is useful. Gotcha. > Well, we do know that this: > > service/person/avatar > > is a subresource of: > > service/person > > or, at least, we know to the point that we can cope with just > doing something exceptional (an error message?) when we find > out that's not the case. Ah. Agreed. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Elias Sinderson <elias@...> writes: > Web sites which 'clearly can't be restful?' I think this begs the > question as to whether you can provide some examples... You mention that > they 'tend to be applications' ... do you mean flash games and the > like? No. I don't mean that. Though that would be another example. I mean applications that don't conform at all to REST. There are quite a few. Hotmail for example? It's not very RESTful, is it? There are a lot of examples of non-RESTful webapps, or at least webapps that are only dimly RESTful. These examples tend to be apps that are trying to mimic a desktop app. > Seems like a good time to point out a(nother) oft-neglected constraint > of REST, Code-On-Demand: > <http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7>. Yes. Quite right. I've been advocating it here for quite a while. I don't think this excuses pushing architectural behaviour into such code though. I may be mistaken but I don't think you'd find Roy advocating that either. What javascript and flash *are* useful for is user experience (AJAX driven apps etc...) and also things that the browser absolutely can't do such as reading from a camera. -- Nic Ferrier http://www.tapsellferrier.co.uk
On 5/30/07, Nic James Ferrier <nferrier@...> wrote: > > Elias Sinderson <elias@...> writes: > > > Web sites which 'clearly can't be restful?' I think this begs the > > question as to whether you can provide some examples... You mention that > > they 'tend to be applications' ... do you mean flash games and the > > like? > > No. I don't mean that. Though that would be another example. > > I mean applications that don't conform at all to REST. There are quite > a few. Hotmail for example? It's not very RESTful, is it? or gMail, or Yahoo! mail ... Virtually every e-commerce site ... It's probably easier to list the RESTful ones! > > There are a lot of examples of non-RESTful webapps, or at least > webapps that are only dimly RESTful. These examples tend to be apps > that are trying to mimic a desktop app. Not necessarily. It is relatively easy for a simple app to be (nearly / accidentally) RESTful (think twitter) but without deliberate effort, the more complexity ... the less RESTfulness. > > > Seems like a good time to point out a(nother) oft-neglected constraint > > of REST, Code-On-Demand: > > <http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7>. > > Yes. Quite right. I've been advocating it here for quite a while. > > I don't think this excuses pushing architectural behaviour into such > code though. I may be mistaken but I don't think you'd find Roy > advocating that either. Let's not forget that this is stipulated as an optional constraint: "However, it also reduces visibility, and thus is only an optional constraint within REST." > > What javascript and flash *are* useful for is user experience (AJAX > driven apps etc...) and also things that the browser absolutely can't > do such as reading from a camera. > Agreed, but I fear that there is much injudicious use of AJAX. If you are Google, you can afford the overhead of maintaining gMail, but I can see many AJAX-driven sites becoming swiftly unmaintainable.
I think that del.icio.us balances well between RESTfulness and AJAX usage. Regards, Alan Dean http://thoughtpad.net/alan-dean
Nic James Ferrier wrote: > Elias Sinderson <elias@...> writes: > >> Web sites which 'clearly can't be restful?' I think this begs the question as to whether you can provide some examples... You mention that they 'tend to be applications' ... do you mean flash games and the like? >> > > No. I don't mean that. Though that would be another example. > > I mean applications that don't conform at all to REST. There are quite a few. Hotmail for example? It's not very RESTful, is it? > > There are a lot of examples of non-RESTful webapps, or at least > webapps that are only dimly RESTful. These examples tend to be apps that are trying to mimic a desktop app. > Okay, sure, but let's distinguish between non-RESTful webapps and those which 'clearly can't be restful' ... examples of the former set are admittedly abundant, however I'm struggling to find examples of the latter. Put simply, unless you are truly perverting the use of HTTP (a-la SOAP, XML-RPC, etc.), and the WWW, *any* webapp is going to be at least somewhat RESTful. Agreed that they are perhaps only 'dimly RESTful' but, again, a far cry from an inherent inability to be RESTful. An existence proof in the form of a URL would help ... but I think it would be difficult to deploy a web application, accessible via a browser, that /couldn't/ be RESTful at all. >> [...] a(nother) oft-neglected constraint of REST, Code-On-Demand: >> <http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7>. >> > > Yes. Quite right. I've been advocating it here for quite a while. > > I don't think this excuses pushing architectural behaviour into such code though. I may be mistaken but I don't think you'd find Roy advocating that either. > > What javascript and flash *are* useful for is user experience (AJAX driven apps etc...)
> and also things that the browser absolutely can't do such as reading from a camera. Respectfully, I tend to disagree -- all of a system's components should respect the broader system architecture. Javascript and AJAX are prime examples of the code-on-demand style where one should avoid undermining the principles of REST. The chain only being as strong as the weakest link, and all that. ... For example, what good is it for the page containing the Javascript to be cacheable, while all the niggling little AJAX requests are not? Cheerfully, Elias
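Elias's point about the niggling little AJAX requests amounts to giving them the same validator treatment as the page that embeds them. A sketch of the conditional-GET half of that, with illustrative header values (the max-age is arbitrary):

```python
def respond(request_headers, entity_etag):
    """Return (status, headers): 304 if the client's cached copy is
    current, else 200 with a validator and cache policy attached."""
    quoted = f'"{entity_etag}"'
    if request_headers.get("If-None-Match") == quoted:
        return 304, {}
    return 200, {"ETag": quoted, "Cache-Control": "public, max-age=300"}
```

An AJAX response served this way is revalidated instead of refetched, just like any other representation on the web.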
Gentlemen, * Mike Schinkel <mikeschinkel@...> [2007-05-30 18:20]: > But I agree with Jon Hanna, even URLs designed for machines > still need to be handled and understood by humans if for no > other reason than programming and debugging. So I assert, > whenever possible, design your URLs as it can make your REST > architecture less obtuse. I think we’re in violent agreement here. :-) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
"Alan Dean" <alan.dean@...> writes: > Agreed, but I fear that there is much injudicious use of AJAX. If you > are Google, you can afford the overhead of maintaining gMail, but I > can see many AJAX-driven sites becoming swiftly unmaintainable. Interesting view. I think AJAX works best when you have a RESTful description of how something holds together. In other words AJAX just helps integrate resources into views. But AJAX doesn't have to be used like that. It can be very session-y and effectively require continuations to understand it. -- Nic Ferrier http://www.tapsellferrier.co.uk
On 5/30/07, Nic James Ferrier <nferrier@...> wrote: > "Alan Dean" <alan.dean@...> writes: > > > Agreed, but I fear that there is much injudicious use of AJAX. If you > > are Google, you can afford the overhead of maintaining gMail, but I > > can see many AJAX-driven sites becoming swiftly unmaintainable. > > Interesting view. I think AJAX works best when you have a RESTfull > description of how something holds together. > > In other words AJAX just helps integrate resources into views. I would agree - done right. But isn't that always the catch? The hardest thing about REST isn't the architectural style itself, it isn't the URI-space: it is the decomposition of the domain resource-space. That's where I see people running straight into a wall. Perhaps my experience is worse because of the evils of ASP.NET (I work with Microsoft technologies professionally and I hate the WebForms abstraction model with a passion) which interferes with clear thinking by obscuring resource representation exposure via the URI-space. In any event, I believe that resource-space decomposition is fundamentally hard. Furthermore, unlike database developers in their work, we don't currently have good notation and toolsets to assist the process. Alan
On Wed, May 30, 2007 at 08:57:52PM +0100, Alan Dean wrote: > Let's not forget that this (code-on-demand) is stipulated as an > optional constraint: > "However, it also reduces visibility, and thus is only an optional > constraint within REST." What does "visibility" mean in this context? -- Paul Winkler http://www.slinkp.com
On 5/30/07, Paul Winkler <pw_lists@...> wrote: > > On Wed, May 30, 2007 at 08:57:52PM +0100, Alan Dean wrote: > > Let's not forget that this (code-on-demand) is stipulated as an > > optional constraint: > > "However, it also reduces visibility, and thus is only an optional > > constraint within REST." > > What does "visibility" mean in this context? My interpretation of what Roy meant by that is the visibility of two specified constraints within "Uniform Interface", namely "identification of resources" and "hypermedia as the engine of application state". I mentally visualize it by the opposite state of obscurity. A case in point: Google Maps. In Google Maps, you load some URL and can then browse around. If you had full visibility, the URL would keep on changing, but thanks to the ugliness of browser reloads Google swaps out images dynamically using AJAX. Thus the visibility is obscured because your address bar now shows the wrong URL for the current view. The "Link to the page" hyperlink just above the map actually does change, so in this case the loss of visibility is mitigated somewhat. Alan
Hi Paul, Visibility is all about how obvious and "visible" the data and interactions are between client and server. The closest opposite is obfuscated. If you look at the serialized Java objects that are shipped over HTTP in the DWR (and GWT?) implementations you should get a strong, visceral understanding ;) This is the first characteristic that I judge AJAX toolkits by. If the HTML/JSON/XML data is visible and the AJAX requests aren't tunnelling RPC, then the framework is using REST-inspired techniques of common data formats and exposing resources. Mobile code is dangerous because it makes it really easy to screw up the Visibility of the representations and resources. John Heintz On 5/30/07, Paul Winkler <pw_lists@...> wrote: > On Wed, May 30, 2007 at 08:57:52PM +0100, Alan Dean wrote: > > Let's not forget that this (code-on-demand) is stipulated as an > > optional constraint: > > "However, it also reduces visibility, and thus is only an optional > > constraint within REST." > > What does "visibility" mean in this context? > > -- > > Paul Winkler > http://www.slinkp.com > > > > Yahoo! Groups Links > > > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
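John's toolkit test, "is the payload visible?", is easy to demonstrate with a toy contrast (the payload and names here are invented for illustration; DWR's actual wire format is not shown): the same data shipped as self-describing JSON versus as an opaque serialized blob that only the generating toolkit can interpret.

```python
import base64
import json
import pickle

profile = {"user": "fred", "email": "fred@example.com"}

# Visible: any intermediary, log reader, or non-Java client can read this.
visible = json.dumps(profile)

# Obscured: the moral equivalent of shipping serialized Java objects --
# opaque bytes that only the originating framework can make sense of.
obscured = base64.b64encode(pickle.dumps(profile)).decode("ascii")

print(visible)   # {"user": "fred", "email": "fred@example.com"}
print(obscured)  # unreadable base64 noise
```

Both bodies carry identical information; only the first keeps the interaction visible to everything between client and server.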
Whilst I know that many, if not most, of the readers of the list probably don't use Microsoft technologies - for those that do or are interested here is a podcast on Channel9: "A conversation with Justin Smith about syndication and REST in the Orcas release of Windows Communication Foundation" http://channel9.msdn.com/ShowPost.aspx?PostID=311356 "In version 3.5 of the .NET Framework, the Windows Communication Foundation will define a set of types that abstractly represent syndication feeds and items in feeds, and will provide mappings from those abstractions to RSS and Atom. In this conversation we discuss how this new support for syndication will work, and explore interesting scenarios for using it. We also discuss one of the underpinnings for syndication support in Orcas/WCF: a new ability to produce and consume services in a RESTful manner." Regards, Alan Dean http://thoughtpad.net/alan-dean
"Alan Dean" <alan.dean@...> writes: > In any event, I believe that resource-space decomposition is > fundamentally hard. it's not trivial. I wouldn't say it was hard exactly. There are only 2 hard things in computer science: cache entry invalidation and naming things. > Furthermore, unlike database developers in their work, we don't > currently have good notation and toolsets to assist the process. Agreed. -- Nic Ferrier http://www.tapsellferrier.co.uk
On Wed, May 30, 2007 at 04:20:19PM -0500, John D. Heintz wrote: > Hi Paul, > > Visibility is all about how obvious and "visible" the data and > interactions are between client and server. Ah, thanks. So, lack of visibility is the disease which accounts for some of my gripes with RPC ... eg. what I think of as "Useless Log Syndrome". I hate troubleshooting xmlrpc, jsonrpc, et al. because of this: 127.0.0.1 - Anonymous [15/May/2007:10:51:43 -0400] "POST /foo HTTP/1.1" 200 220 "" "fooclient" 127.0.0.1 - Bob [15/May/2007:10:51:43 -0400] "POST /foo HTTP/1.1" 200 321 "" "fooclient" 127.0.0.1 - Fred [15/May/2007:10:51:45 -0400] "POST /foo HTTP/1.1" 200 3257 "" "fooclient" ... What a gripping tale! Good thing we have millions of lines of this log archived for posterity. From this I can tell that Bob tried to do something unauthorized, Anonymous called a method that doesn't exist, and Fred successfully updated his profile. Oh wait, no I can't. That's not to say that application-specific logging isn't necessary, of course it is. But when your server gives you a perfectly useful and widely understood log out of the box, why throw all that information away? Never mind, you all know this already :-) -- Paul Winkler http://www.slinkp.com
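The contrast Paul describes is easy to see side by side. As a hypothetical sketch (the URIs, users, and status codes are invented for illustration), here is how the same three events might read if each operation went through the uniform interface instead of tunnelling through POST /foo:

```
127.0.0.1 - Anonymous [15/May/2007:10:51:43 -0400] "GET /methods/frobnicate HTTP/1.1" 404 220 "" "fooclient"
127.0.0.1 - Bob [15/May/2007:10:51:43 -0400] "DELETE /users/fred HTTP/1.1" 403 321 "" "fooclient"
127.0.0.1 - Fred [15/May/2007:10:51:45 -0400] "PUT /users/fred/profile HTTP/1.1" 200 3257 "" "fooclient"
```

Now the stock server log alone tells you who tried what, on which resource, and how it turned out.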
> >it's not trivial. I wouldn't say it was hard exactly. > >There are only 2 hard things in computer science: cache entry >invalidation and naming things. > Well put. Which reminds me of an article I read recently: http://www.itworld.com/Tech/2327/nlsebiz070123/pfindex.html -Eric
On 5/30/07, Nic James Ferrier <nferrier@...> wrote: > "Alan Dean" <alan.dean@...> writes: > > > In any event, I believe that resource-space decomposition is > > fundamentally hard. > > it's not trivial. I wouldn't say it was hard exactly. Perhaps I should elaborate further. 1. Take a typical 'jobbing developer' supporting a typical e-commerce platform. 2. Ask him/her to decompose the resource-space of the application. 3. Watch the carnage. I don't mean to demean 'jobbing developers' at all - these are the people who keep all our favourite shopping sites up and running. My point is that their focus is *not* on resource abstractions - it is on user functionality (often small scale), middle-tier and back-end maintenance and extension (often against spaghetti code and within highly diffuse boundaries) using databases whose schemas are often ancient. For the typical reader of this list, who I expect is personally motivated by this problem domain and is likely well-read, I am sure that resource-space decomposition is a walk in the park ;-) but if REST is to fulfil its promise then I personally believe that we need to win the hearts and minds of those 'jobbing developers'. Alan
Paul, Yes, that is an excellent example of lack of visibility. I see you already have experienced the visceral understanding of these things ;) John Heintz On 5/30/07, Paul Winkler <pw_lists@...> wrote: > On Wed, May 30, 2007 at 04:20:19PM -0500, John D. Heintz wrote: > > Hi Paul, > > > > Visibility is all about how obvious and "visible" the data and > > interactions are between client and server. > > Ah, thanks. So, lack of visibility is the disease which accounts for > some of my gripes with RPC ... eg. what I think of as "Useless Log > Syndrome". I hate troubleshooting xmlrpc, jsonrpc, et al. because of > this: > > 127.0.0.1 - Anonymous [15/May/2007:10:51:43 -0400] "POST /foo HTTP/1.1" 200 220 "" "fooclient" > 127.0.0.1 - Bob [15/May/2007:10:51:43 -0400] "POST /foo HTTP/1.1" 200 321 "" "fooclient" > 127.0.0.1 - Fred [15/May/2007:10:51:45 -0400] "POST /foo HTTP/1.1" 200 3257 "" "fooclient" > ... > > What a gripping tale! Good thing we have millions of lines of this log > archived for posterity. From this I can tell that Bob tried to do > something unauthorized, Anonymous called a method that doesn't exist, > and Fred successfully updated his profile. Oh wait, no I can't. > > That's not to say that application-specific logging isn't necessary, > of course it is. But when your server gives you a perfectly useful > and widely understood log out of the box, why throw all that > information away? > > Never mind, you all know this already :-) > > -- > > Paul Winkler > http://www.slinkp.com > > > > Yahoo! Groups Links > > > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
On 5/30/07, Nic James Ferrier <nferrier@...> wrote: > "Alan Dean" <alan.dean@...> writes: > > > In any event, I believe that resource-space decomposition is > > fundamentally hard. > > it's not trivial. I wouldn't say it was hard exactly. > > There are only 2 hard things in computer science: cache entry > invalidation and naming things. Plus, resource-space decomposition is largely an exercise in naming and classification. Alan
>> >> > In any event, I believe that resource-space decomposition is >> > fundamentally hard. >> >> it's not trivial. I wouldn't say it was hard exactly. >> >> There are only 2 hard things in computer science: cache entry >> invalidation and naming things. > >Plus, resource-space decomposition is largely an exercise in naming >and classification. > Actually, I name my resources first, then design the resource space in terms of "how the website looks". I start with an identifier which may not be dereferenced (because I say it's only an identifier): [1] http://example.org/2007-05-30.1 I define this to mean the first resource created by the application on the given date. I can abstract this into a website in more than one way, though. First, by extrapolating the identifier into a human-friendly hierarchical resource space with hackable Cool URIs mixing names and numbers like so: [2] http://example.org/2007/may/30/1 Of course, I could also base my resource space on the content of the resource instead of the name of the resource. If "2007-05-30.1" is described by a representation in the Atom format containing hypertext markup which implements a <category> scheme for the site/service (whatever), then the contents of category tags found within those representations might constitute the resource space: [3] http://example.org/foo A representation of /foo may describe several named resources included by the server, so the names of the resources themselves may not be exposed. Or, they could be extrapolated out into webspace like this: [4] http://example.org/foo/2007-05-30.1 Of course, I could use any or all of [1] - [4] at any given point in time, or change abruptly from [4] to [2] without ever needing to rename my resources. In my case I would use [1] as my Atom ID and store the hypermedia in an Atom Store.
This imposes no restriction on the design of the resource space intended to be dereferenced from the Web, nor does it impose any restriction on the media type chosen for any or all of [2] - [4], and of course I could always devise a [5]: [5] waka://example.org/2012/may/30/1 -Eric
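Eric's separation of the stable identifier from the public URI layout can be sketched in a few lines of code (hypothetical helper functions, not from any real framework): the same internal name projects into whichever webspace you choose, and can be re-projected later without renaming anything.

```python
# Hypothetical sketch: one stable identifier ("2007-05-30.1"),
# several interchangeable public URI layouts.  Switching layouts
# never touches the identifier itself.

MONTHS = ["jan", "feb", "mar", "apr", "may", "jun",
          "jul", "aug", "sep", "oct", "nov", "dec"]

def hierarchical(ident):
    """Project '2007-05-30.1' into Eric's layout [2]: /2007/may/30/1"""
    date, seq = ident.split(".")
    year, month, day = date.split("-")
    return "/%s/%s/%d/%s" % (year, MONTHS[int(month) - 1], int(day), seq)

def categorised(ident, category):
    """Project the same identifier into layout [4]: /foo/2007-05-30.1"""
    return "/%s/%s" % (category, ident)

print(hierarchical("2007-05-30.1"))        # /2007/may/30/1
print(categorised("2007-05-30.1", "foo"))  # /foo/2007-05-30.1
```

The projection functions can change abruptly, as Eric says, because nothing downstream depends on them: the identifier is the only name that must stay put.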
"Alan Dean" <alan.dean@...> writes: > I don't mean to demean 'jobbing developers' at all - these are the > people who keep all our favourite shopping sites up and running. My > point is that their focus is *not* on resource abstractions - it is on > user functionality (often small scale), middle-tier and back-end > maintenance and extension (often against spaghetti code and within > highly diffuse boundaries) using databases whose schemas are often > ancient. > > For the typical reader of this list, who I expect is personally > motivated by this problem domain and is likely well-read, I am sure > that resource-space decomposition is a walk in the park ;-) but if > REST is to fulfil it's promise then I personally believe that we need > to win the hearts and minds of those 'jobbing developers'. Gotcha. And it must be quite hard or we'd have made it automatic by now. Understood. -- Nic Ferrier http://www.tapsellferrier.co.uk
"John D. Heintz" <jheintz@...> writes: > Mobile code is dangerous because it makes it really easy to screw up > the Visibility of the representations and resources. My view on this is that REST is primarily important to me for scaling reasons. If I make the AJAX bits of my site non-RESTfull then they're going to scale badly, like any other non-RESTfull stuff. If that's important I'll make sure it doesn't happen. -- Nic Ferrier http://www.tapsellferrier.co.uk
Scalability is a significant point, but I think that interoperability is also very impacted by visibility. The DWR framework ends up building serialized Java messages that get shipped over an RPC-like dispatcher. The visibility is awful, and I would never want to try to write Ruby, Python, C# or even Java!! code to use a DWR site as a web service. If interoperability isn't an issue, then scalability is definitely still a concern. John Heintz On 5/30/07, Nic James Ferrier <nferrier@...> wrote: > "John D. Heintz" <jheintz@...> writes: > > > Mobile code is dangerous because it makes it really easy to screw up > > the Visibility of the representations and resources. > > My view on this is that REST is primarily important to me for scaling > reasons. > > If I make the AJAX bits of my site non-RESTfull then they're going to > scale badly, like any other non-RESTfull stuff. > > If that's important I'll make sure it doesn't happen. > > > -- > Nic Ferrier > http://www.tapsellferrier.co.uk > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
"John D. Heintz" <jheintz@...> writes: > Scalability is a significant point, but I think that interoperability > is also very impacted by visibility. > > The DWR framework ends up building serialized Java messages that get > shipped over an RPC-like dispatcher. The visibility is awful, and I > would never want to try to write Ruby, Python, C# or even Java!! code > to use a DWR site as a web service. > > If interoperability isn't an issue, then scalability is definitely > still a concern. Yes! I don't want to denigrate interoperability! The point I was trying to make is that I might choose to use non-RESTfull AJAX because I didn't care at all about interoperability for those particular bits of code. For example, in a chat application you might have an RPC-like call to establish who is currently logged on in a chat room because it doesn't make sense to expose that outside of the context of the chat room (which is a moot point I accept). But then I also have to consider the scaling issue as well. -- Nic Ferrier http://www.tapsellferrier.co.uk
Hi Everybody, I've begun what I hope will be a cool site for keeping track of REST-related news and blog posts. It's at http://rest.corank.com . It's very default-looking right now, but I hope to jazz it up a little. (Any suggestions?) Personally, I've been doing web service-related software (mostly geospatial metadata, catalogs and portals) for a while now, and I think the ball is finally rolling for REST. Anyway, just hoping something like this might be useful... What do you think?? Thanks, Jason
"Jason" <jcupp10@...> writes: > Hi Everybody, > > I've begun what I hope will be cool site for keeping track of > REST-related news and blog posts. It's at http://rest.corank.com > <http://rest.corank.com> . It's very default-looking right now, but I > hope to jazz it up a little. (Any suggestions?) Personally, I've been > doing web service-related software (mostly geospatial metadata, catalogs > and portals) for a while now, and I think the ball is finally rolling > for REST. Anyway, just hoping something like this might be useful... > What do you think?? Bah! OpenID please! -- Nic Ferrier http://www.tapsellferrier.co.uk
Now, now, now... this isn't the OpenID alias last I checked. ;) - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org On May 30, 2007, at 5:29 PM, Nic James Ferrier wrote: > "Jason" <jcupp10@...> writes: > > > Hi Everybody, > > > > I've begun what I hope will be cool site for keeping track of > > REST-related news and blog posts. It's at http://rest.corank.com > > <http://rest.corank.com> . It's very default-looking right now, > but I > > hope to jazz it up a little. (Any suggestions?) Personally, I've > been > > doing web service-related software (mostly geospatial metadata, > catalogs > > and portals) for a while now, and I think the ball is finally > rolling > > for REST. Anyway, just hoping something like this might be useful... > > What do you think?? > > Bah! OpenID please! > > -- > Nic Ferrier > http://www.tapsellferrier.co.uk > >
Elias Sinderson <elias@...> writes: > Okay, sure, but let's distinguish between non-RESTful webapps and those > which 'clearly can't be restful' ... examples of the former set are > admittedly abundant, however I'm struggling to find examples of the > latter. > > Put simply, unless you are truly perverting the use of HTTP (a-la SOAP, > XML-RPC, etc.), and the WWW, *any* webapp is going to be at least > somewhat RESTful. Agreed that they are perhaps only 'dimly RESTful but, > again, a far cry from an inherent inability to be RESTful. An existence > proof in the form of a URL would help ... but I think it would be > difficult to deploy a web application, accessible via a browser, that > /couldn't/ be RESTful at all. I think you're nitpicking. To be clear about my meaning: Hotmail is not RESTfull. It clearly is not RESTfull. There are bits around Hotmail that are RESTfull, the homepage, the about page (though even that has got a weird URL) and so forth. But the application is not itself RESTfull. That's not to say that you couldn't build a RESTfull webmail app, indeed I have; or even that Hotmail couldn't be rewritten to be RESTfull. But then it would look pretty different to what it does now. > Respectfully, I tend to disagree -- all of a systems components should > respect the > broader system architecture. Javascript and AJAX are prime examples of > the code-on-demand style where one should avoid undermining the > principles of REST. The chain only being as strong as the weakest link, > and all that. ... For example, what good is it for the page containing > the Javascript to be cacheable, while all the niggling little AJAX > requests are not? But why are they not cacheable? There's no reason at all that the AJAX requests cannot be cacheable. As I write this I am working on a little bit of code to if-modified-since enable an AJAX accessed Django app.
There is no reason that the resources called from AJAX in a browser shouldn't be subject to the same constraints as the rest (pun intended) of the app. -- Nic Ferrier http://www.tapsellferrier.co.uk
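The conditional-GET enabling Nic mentions boils down to comparing the client's If-Modified-Since header against the resource's last-modified time and answering 304 with no body when nothing has changed. A minimal, framework-neutral sketch (the function name and shape are invented for illustration; Nic's actual Django code is not shown here):

```python
from email.utils import formatdate, parsedate_to_datetime

def conditional_get(resource_mtime, if_modified_since=None):
    """Return (status, body_needed) for a GET, honouring
    If-Modified-Since.  resource_mtime is a Unix timestamp;
    if_modified_since is the raw header value, or None."""
    if if_modified_since is not None:
        try:
            client_time = parsedate_to_datetime(if_modified_since).timestamp()
        except (TypeError, ValueError):
            client_time = None
        # HTTP dates have one-second resolution, so compare whole seconds.
        if client_time is not None and int(resource_mtime) <= int(client_time):
            return 304, False   # Not Modified: headers only, no body
    return 200, True            # send the full representation

# A 200 response should carry a Last-Modified header so the client's
# next request can be made conditional:
print(formatdate(1180569600, usegmt=True))
```

The same shape works for ETag/If-None-Match; either way the AJAX responses become cacheable like any other resource.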
Nic James Ferrier wrote > But why are they not cacheable? There's no reason at all that > the AJAX requests cannot be cacheable. As I write this I am > working on a little bit of code to if-modified-since enable > an AJAX accessed Django app. > > There is no reason that the resources called from AJAX in a > browser shouldn't be subject to the same constraints as the rest (pun > intended) of the app. +1 -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Nic James Ferrier wrote: > Elias Sinderson <elias@...> writes: > >> [...] unless you are truly perverting the use of HTTP (a-la SOAP, XML-RPC, etc.), and the WWW, *any* webapp is going to be at least somewhat RESTful. [...] >> > > I think you're nitpicking. To be clear about my meaning: > Hotmail is not RESTfull. It clearly is not RESTfull. > Well, one persons' nit is another persons' ... ;) Anyway, it would seem that we're mostly in agreement. My only issue was with the contention that there are web apps that 'clearly can't be RESTful'. True, there are a number of really bad, unRESTful, web apps out there but /not/ doesn't imply /can't/ in this context. As quoted elsewhere, "The Web is REST. REST is the Web." [1] > >> [...] all of a systems components should respect the >> broader system architecture. >> > > There is no reason that the resources called from AJAX in a browser shouldn't be subject to the same constraints as the rest (pun intended) of the app. +1, truly. It is a shame to see otherwise ... Regards, Elias [1] <http://tinyurl.com/2hmtsx>
--- In rest-discuss@yahoogroups.com , Nic James Ferrier <nferrier@...> wrote: > Bah! OpenID please! The people at coRank are cool, but not *that* cool. Sorry no OpenID. I suppose a Digg-like site is supposed to be about what's new and popular, but there are so many great REST articles written many years ago -- there's really a deep history to be discovered by newcomers. I'll probably stick them in too... The REST articles I enjoy reading the most are the ones that show a table with URL or URL patterns with an HTTP verb and an explanation about what kind of resources the URL is supposed to refer to. Such good practical stuff for both programmers and non-programmers to start to understand REST. If you can construct good URIs by learning from examples, then you're a long way there. Then there's how HTTP is used to traverse URL links and deliver content via HTML and RSS. And then deeper material on the esoterica of resource, representation, content negotiation and URI opacity. Every time I see a new flashy Web 2.0 website, I look at its URIs to see how RESTful it is -- "cool" URIs are definitely here to stay. And I also wonder what XHTML 2 will bring with its ability to use more of HTTP in forms, for example... Lots to think about... and Post to http://rest.corank.com ! - Jason
Eric J. Bowman wrote: > ... First, by extrapolating the identifier into a human- > friendly hierarchical resource space with hackable Cool URIs mixing > names and numbers like so: > > [2] http://example.org/2007/may/30/1 I think the point about Cool URIs is that they continue to reference the same thing over time, or failing that provide a mechanism (e.g. redirect) for discovering that same thing. It doesn't have anything to do with readability.
On 5/31/07, Jason <jcupp10@...> wrote: > > Hi Everybody, > > I've begun what I hope will be cool site for keeping track of REST-related news and blog posts. It's at http://rest.corank.com . It's very default-looking right now, but I hope to jazz it up a little. (Any suggestions?) Personally, I've been doing web service-related software (mostly geospatial metadata, catalogs and portals) for a while now, and I think the ball is finally rolling for REST. Anyway, just hoping something like this might be useful... What do you think?? Subscribed, but not joined (at least for the present)
Nic James Ferrier wrote: >> Seems like a good time to point out a(nother) oft-neglected constraint >> of REST, Code-On-Demand: >> <http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7>. > > Yes. Quite right. I've been advocating it here for quite a while. > > > I don't think this excuses pushing architectural behaviour into such > code though. I may be mistaken but I don't think you'd find Roy > advocating that either. Roy may or may not. I would but with a caveat. I'm quite happy to bung any manner of stuff into such code. I'm happy to have a few URI-deducing tricks I frown on generally in such code (since the server is still in charge, being the origin of the code so in a way the code is a sort of hypermedia and a valid source of URIs) and I'm happy to have such code handle quite heavy architectural matters. The caveat though is that COD has a special status in REST generally and the web in particular - you can't depend upon it working. You can build an app and say "must support javascript" and that may work for the cases you are dealing with (certainly when dealing with legacy systems one's often in the position of being able to say "well, they wouldn't have gotten this far if they didn't have javascript"!) but if universality is a goal then you have to be able to do the same things in other ways or abandon some of your functionality.
A. Pagaltzis wrote: > Gentlemen, > > * Mike Schinkel <mikeschinkel@...> [2007-05-30 18:20]: >> But I agree with Jon Hanna, even URLs designed for machines >> still need to be handled and understood by humans if for no >> other reason than programming and debugging. So I assert, >> whenever possible, design your URLs as it can make your REST >> architecture less obtuse. > > I think we’re in violent agreement here. :-) Okay, what advantages are there in having a URI that can't be read by humans and isn't amenable to relative references? (Saying you don't need them doesn't count - I already agree there, I'm just saying it's good to have them all the same - I'm looking for actual disadvantages in readable URIs).
Nic James Ferrier wrote: > "Alan Dean" <alan.dean@...> writes: > >> Agreed, but I fear that there is much injudicious use of AJAX. If you >> are Google, you can afford the overhead of maintaining gMail, but I >> can see many AJAX-driven sites becoming swiftly unmaintainable. > > Interesting view. I think AJAX works best when you have a RESTfull > description of how something holds together. > > In other words AJAX just helps integrate resources into views. Indeed RESTful AJAX can be used to do a few things that could only previously be done with session state.
Jon Hanna <jon@...> writes: > The caveat though is that COD has a special status in REST generally and > the web in particular - you can't depend upon it working. You can build > an app and say "must support javascript" and that may work for the cases > you are dealing with (certainly when dealing with legacy systems one's > often in the position of being able to say "well, they wouldn't have > gotten this far if they didn't have javascript"!) but if universality is > a goal then you have to be able to do the same things in other ways or > abandon some of your functionality. Yep. I am trying (and clearly failing) to say that AJAX is fine as long as it sits on top of a RESTfull architecture so the app can continue to work nicely without the code running (and is accessible to machines [curl] etc...) And again, I should reiterate that COD is also useful for things that the web can't do: video to video flash app anyone? music player? -- Nic Ferrier http://www.tapsellferrier.co.uk
On 5/31/07, Jon Hanna <jon@...> wrote: > > (Saying you don't need them doesn't count - I already agree there, I'm > just saying it's good to have them all the same - I'm looking for actual > disadvantages in readable URIs). I think that the disadvantages are in the same arena as the advantages: namely in understanding and thinking. I see a real risk that people will think that "RESTful URI-space == Database table" along with thinking that "REST == CRUD". It's just such an easy trap to fall into. I discussed this in more depth in an earlier post on this thread: http://tech.groups.yahoo.com/group/rest-discuss/message/8615 (To avoid a repetition of other previous posts - I am not trying to imply that production URIs should be obscure, but that when discussing and promoting REST we ought to make URI transparency a distinct topic requiring explanation to mitigate 'bad thinking') Regards, Alan Dean http://thoughtpad.net/alan-dean
Nothing new for folks around here, probably, but still: http://www.infoq.com/news/2007/05/is-rest-winning Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On May 31, 2007, at 12:14 PM, Jon Hanna wrote: > Okay, what advantages are there in having a URI that can't be read by > humans and isn't amenable to relative references? No-one will try to build them, manually or programmatically, according to some real or imagined recipe - which means that their very unreadability forces you to rely on something else (hypermedia, hopefully). It's also less likely that someone will build meaning into URIs that you don't want to have there - http://example.com/crm-system?action=remove_all_entries&mode=immediate I still believe the disadvantages, e.g. more difficult debugging and inconvenient interaction from the command line, outweigh this, but as you asked for advantages ... Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
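Stefan's argument, that unreadable URIs force clients onto hypermedia, can be made concrete with a toy client (the link structure and helper name here are invented for illustration): the client never assembles a URI from a recipe, it only follows links it was handed, so the server is free to mint opaque identifiers and change them at will.

```python
# Toy hypermedia-driven client: URIs are opaque tokens harvested from
# representations, never constructed from a real or imagined recipe.

def find_link(representation, rel):
    """Pick the href for a given link relation out of a parsed
    representation (a dict standing in for parsed Atom/HTML)."""
    for link in representation.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

entry = {
    "links": [
        {"rel": "self", "href": "http://example.com/x9f3kq"},
        {"rel": "edit", "href": "http://example.com/x9f3kq;edit"},
    ]
}

# The client asks "where do I send the update?" rather than guessing
# /crm-system?action=... -- the opaque href is all it ever needs.
print(find_link(entry, "edit"))   # http://example.com/x9f3kq;edit
```

Because no client has baked in a URI recipe, the server can later rename x9f3kq to anything without breaking anyone who follows the links.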
Stefan Tilkov wrote: > On May 31, 2007, at 12:14 PM, Jon Hanna wrote: > >> Okay, what advantages are there in having a URI that can't be read by >> humans and isn't amenable to relative references? > > No-one will try to build them, manually or programatically, according > to some real or imagined recipe - which means that their very > unreadability forces you to rely on something else (hypermedia, > hopefully). Actually, that's a good point. Very often making something hard to hack has a greater effect upon discouraging bad hacking than good hacking.
Jon Hanna <jon@...> writes: > Actually, that's a good point. Very often making something hard to hack > has a greater effect upon discouraging bad hacking than good > hacking. I'd like to see more empirical evidence of that. Personally, I'd believe that there's no such thing as bad hacking. -- Nic Ferrier http://www.tapsellferrier.co.uk
Good summary article.
I think it's already won on the web, for consumer services. The battle
I think we're still wondering about is "the enterprise" -- using REST
to make business integration more evolvable & productive.
There, I don't think it's winning, necessarily, but I do think it's
almost finished "crossing the chasm". The Atom Publishing Protocol
seems to be the event driving that, but this is also helped by releases
like GData, the Facebook platform, Amazon S3, the Sam Ruby book, etc.
But WS-splat web services continue to flourish unabated in most
enterprises. I think that those looking at WSDL 2.0 and WS-Addressing
are either horrified or enthralled, depending on their value system.
Most still haven't heard of REST, and typically get a quizzical look
when I bring it up in my travels. The reaction is not negative, mind
you, but I think it's indicative of the stage of the journey.
The big adoption will occur when OSS groups & vendors come out with a
new breed of tools that don't just staple a bag labeled "REST" on the
side, but actually provide an agent-oriented platform that's based on
the architecture. All the effort has been too server-focused, in my
view.
We often dream about the leaps in flexibility and evolution over time
due to the constraints of REST, but it hasn't quite made its way into
the agent's development environment. But I don't think even the early
adopters know what this quite would look like. It could be radically
different than what we're used to. And it's a very different direction
from the current rage, BPM-land (tagline, "all your process are belong
to us").
Cheers
Stu
--- Stefan Tilkov <stefan.tilkov@...> wrote:
> Nothing new for folks around here, probably, but still:
>
> http://www.infoq.com/news/2007/05/is-rest-winning
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
>
Stefan Tilkov wrote: > > > Nothing new for folks around here, probably, but still: > > http://www.infoq.com/news/2007/05/is-rest-winning The recent uptake of interest in REST is a mixed blessing. It's good to see, but I've seen arrant nonsense around transactions, binding, service descriptions; basically interpreting REST to mean whatever one wants it to mean. That's fine for things like WS and SOA which are meaning-free, but it simply won't do for a well-documented style. The technical value needs to be squarely protected from vendors, analysts, marketing departments and on-the-wagon RPC coders. I've set my bozo bit for WS and SOA types who are repositioning themselves as REST stalwarts. Spotting a bandwagon is not an indicator of competence. cheers Bill
On 5/31/07, Bill de hOra <bill@...> wrote: > > I've seen arrant nonsense around transactions ... +1 > ... binding, service descriptions ... +1 with knobs on > ... basically interpreting REST to mean whatever one wants to > mean ... it simply won't do for a well-documented style. The truth is that it is non-trivial to transfer a mindset from RPC / WS-* / etc to REST - it was interesting to hear Tim Ewald say "I finally get REST. Wow." because that is the personal reaction you can see when 'the light switches on' and the worldview shifts. > The technical value > needs to be squarely protected from vendors, analysts marketing > departments and on-the-wagon rpc coders. *sigh* methinks that'll happen whatever we do! > I've set my bozo bit for WS and SOA types who are repositioning > themselves as REST stalwarts. Spotting a bandwagons is not an indicator > of competence. /me nods But, at the same time, let's not let the door slam shut in the face of those, like Tim, who are really getting onboard with the style. Regards, Alan Dean http://thoughtpad.net/alan-dean
> > I've seen arrant nonsense around transactions ... > > +1 > > > ... binding, service descriptions ... > > +1 with nobs on > > > ... basically interpreting REST to mean whatever one wants > to mean ... > > it simply won't do for a well-documented style. > > The truth is that it is non-trivial to transfer a mindset from RPC / > WS-* / etc to REST - it was interesting to hear Tim Ewald say > "I finally get REST. Wow." because that is the personal > reaction you can see when 'the light switches on' and the > worldview shifts. > > > The technical value > > needs to be squarely protected from vendors, analysts marketing > > departments and on-the-wagon rpc coders. > > *sigh* methinks that'll happen whatever we do! > > > I've set my bozo bit for WS and SOA types who are repositioning > > themselves as REST stalwarts. Spotting a bandwagons is not an > > indicator of competence. > > /me nods > > But, at the same time, let's not let the door slam shut in > the face of those, like Tim, who are really getting onboard > with the style. +1 This is somewhat what I brought up last October, but if REST is not to be misinterpreted I think it's going to take a group of 'RESTafarians' to get together, discuss, agree, and publish detailed documentation as to what is good REST and what it is not. Certainly Roy's thesis is the final word on REST but it is not approachable for most, and it is definitely not a REST cookbook that explains how to handle various use-cases. Further, I believe a cookbook is what ~99% of people need to successfully implement an architecture style (and I am one of these 99%.) But the cookbook would need to be authoritative. Either the core REST community is going to come together with Roy's blessing, to shepherd the market maturation and to define an authoritative interpretation of the REST architecture style for common use cases, or it will all devolve into chaos and the term REST will be much sullied.
This could take many forms, but one form could be providing proper guidance for, and control of, the "REST" (or some other associated) brand.

I view what I'm suggesting as analogous to when the web was at a crossroads back in the early 90s and TimBL had the foresight to form the W3C, an organization designed to shepherd the market maturation of the web, define protocols, and provide guidance to ensure the web's interoperability and scalability. I'm not suggesting a parallel organization; this hypothetical group could be associated with, or eventually even under, the W3C, whatever works. But if ensuring that the industry implements REST architecture according to Roy's principles is important, as I think most of us here agree it is, then such an effort is imperative.

Who wants to do this? I raise my hand, and I'm pretty sure from private conversation that Alan Dean does too...

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
On 5/31/07, Mike Schinkel <mikeschinkel@...> wrote:
>
> Who wants to do this? I raise my hand, and I'm pretty sure from private
> conversation that Alan Dean does too...

I concur with Mike, and raise my hand too.

Alan
"Mike Schinkel" <mikeschinkel@...> writes:

> This is somewhat what I brought up last October, but if REST is not to be
> misinterpreted I think it's going to take a group of 'RESTafarians' to get
> together, discuss, agree, and publish detailed documentation as to what is
> good REST and what is not. Certainly Roy's thesis is the final word on
> REST but it is not approachable for most, and it is definitely not a REST
> cookbook that explains how to handle various use-cases. Further, I believe
> a cookbook is what ~99% of people need to successfully implement an
> architecture style (and I am one of that 99%). But the cookbook would
> need to be authoritative.

I disagree a bit. I think we need a bunch more RESTful frameworks out there.

I hate frameworks, but they're what people seem to want to use to write code.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
On 5/31/07, Nic James Ferrier <nferrier@...> wrote:
>
> "Mike Schinkel" <mikeschinkel@...> writes:
>
> > This is somewhat what I brought up last October, but if REST is not to be
> > misinterpreted I think it's going to take a group of 'RESTafarians' to get
> > together, discuss, agree, and publish detailed documentation as to what is
> > good REST and what is not. Certainly Roy's thesis is the final word on
> > REST but it is not approachable for most, and it is definitely not a REST
> > cookbook that explains how to handle various use-cases. Further, I believe
> > a cookbook is what ~99% of people need to successfully implement an
> > architecture style (and I am one of that 99%). But the cookbook would
> > need to be authoritative.
>
> I disagree a bit. I think we need a bunch more RESTful frameworks out
> there.
>
> I hate frameworks but they're what people seem to want to use to write
> code.

When you say that you disagree, do you mean that you think that the use-case / cookbook approach is without merit, or that it is in conflict with framework development?

Personally I think that the two can co-exist and, in the best case, improve each other, but I would like to understand the basis of your disagreement.

Alan
"Mike Schinkel" <mikeschinkel@...> writes:

> Who wants to do this? I raise my hand, and I'm pretty sure from private
> conversation that Alan Dean does too...

I raise my hand to any exhortation exercise.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
"Alan Dean" <alan.dean@...> writes: > When you say that you disagree - do you mean that you think that the > use-case / cookbook approach is without merit or is in conflict with > framework development? I mean that it won't get us very far. If there are frameworks that do stuff and save people having to think about other stuff that would be a success. And let a million frameworks bloom. I hate them all so we may as well have loads. -- Nic Ferrier http://www.tapsellferrier.co.uk
On 5/31/07, Bill de hOra <bill@...> wrote: > I've seen arrant nonsense around transactions What particular arrant nonsense around transactions did you have in mind? We've discussed transactions a few times on this list, and I understand the new REST book has a treatment as well. Do you think: * RESTful transactions are impossible, or * some attempts at RESTful transactions have been nonsensical, or * assertions that you need WS-* to do transactions are nonsense, or * something else?
A good friend and colleague (Eliot Kimber) often says: "All tools suck, some just suck less."

Here's to less sucky web frameworks!

John Heintz

On 5/31/07, Nic James Ferrier <nferrier@...> wrote:
> "Alan Dean" <alan.dean@...> writes:
>
> > When you say that you disagree - do you mean that you think that the
> > use-case / cookbook approach is without merit or is in conflict with
> > framework development?
>
> I mean that it won't get us very far.
>
> If there are frameworks that do stuff and save people having to think
> about other stuff, that would be a success.
>
> And let a million frameworks bloom. I hate them all so we may as well
> have loads.
>
> --
> Nic Ferrier
> http://www.tapsellferrier.co.uk

--
John D. Heintz
Principal Consultant
New Aspects of Software
Austin, TX
(512) 633-1198
> If there are frameworks that do stuff and save people having
> to think about other stuff, that would be a success.
>
> And let a million frameworks bloom. I hate them all so we may
> as well have loads.

Just to clarify my view (and possibly to be pedantic), wouldn't a cookbook approach help developers create frameworks? Wouldn't the former empower (and mostly predate) the latter?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
> I disagree a bit. I think we need a bunch more RESTful
> frameworks out there.

To-mayto, to-mah-to. :) I don't disagree with you; we need both. I just didn't elaborate that much in my comments. Basically, I was trying to say we need to provide the market leadership or the market will lead us to dismay.

> I hate frameworks but they're what people seem to want to use
> to write code.

I rather like them, actually. I especially like using the ones I build myself. '-)

-Mike
Stefan Tilkov wrote:
> > Okay, what advantages are there in having a URI that can't be read by
> > humans and isn't amenable to relative references?
>
> No-one will try to build them, manually or programmatically,
> according to some real or imagined recipe - which means that
> their very unreadability forces you to rely on something else
> (hypermedia, hopefully).
> It's also less likely that someone will build meaning into
> URIs that you don't want to have there -
> http://example.com/crm-system?action=remove_all_entries&mode=immediate
>
> I still believe the disadvantages, e.g. more difficult
> debugging and inconvenient interaction from the command line,
> outweigh this, but as you asked for advantages ...

I'm more extreme on this issue than probably anyone else here, but I want to say for the record that a discussion of the "advantages of URLs that can't be understood by humans" without a huge disclaimer (much stronger than that given :) has the strong potential for people to rationalize why it's "good for them to use obtuse URLs" when they'd prefer not to worry about it. Such people will dismiss 99 good reasons why URLs should be well designed and latch onto the 1 dubious reason for making them obtuse. In other words, people will hear what they want to hear: "confirmation bias." Better to give them no ammunition as opposed to some. But that's JMTCW.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
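Stefan's "unreadability forces you to rely on hypermedia" point above can be sketched in a few lines. This is a purely hypothetical example (the representation, link relations, and opaque hrefs are all invented): with URIs like these, the only thing a client can do is follow links by relation name; there is no recipe to pattern-match and no `?action=remove_all_entries` to assemble by hand.

```python
# Hypothetical hypermedia sketch: the client discovers URIs only by
# following named link relations in the representation it received.

def follow(representation, rel):
    """Return the href of the first link with the given relation, or None."""
    for link in representation.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

# An invented response with deliberately opaque URIs.
order = {
    "state": "open",
    "links": [
        {"rel": "self",   "href": "/x/9f3a2c"},
        {"rel": "cancel", "href": "/x/9f3a2c/77b1"},
    ],
}

print(follow(order, "cancel"))  # the only way to learn the cancel URI
```

Whether this outweighs the debugging inconvenience Stefan mentions is exactly the trade-off under discussion; the sketch only shows what "rely on something else (hypermedia)" means mechanically.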
"Mike Schinkel" <mikeschinkel@...> writes:

>> If there are frameworks that do stuff and save people having
>> to think about other stuff, that would be a success.
>>
>> And let a million frameworks bloom. I hate them all so we may
>> as well have loads.
>
> Just to clarify my view (and possibly to be pedantic), wouldn't a cookbook
> approach help developers create frameworks? Wouldn't the former empower
> (and mostly predate) the latter?

Maybe.

Personally, I think we're into the framework creation stage right now.

There are various Java ones emerging. Django. Rails is now more or less REST capable.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
> Maybe.
>
> Personally, I think we're into the framework creation stage right now.

If we were beyond the cookbook stage, there wouldn't be so many discussions on this list about use-cases and how to accomplish things. Some people know it, but there is no source where those who don't know it can go find it, so most end up reinventing it.

Maybe we could create something for REST like the ActiveState (Perl/Python/PHP) Network? [1] Google AdWords could probably self-fund it.

-Mike

[1] http://aspn.activestate.com/ASPN/
Mike Schinkel wrote:

Okay, I agree with the basic idea that cookbooks are, or at least can be, good.

> But the cookbook would
> need to be authoritative.

At this point I disagree. The cookbook would have to be a good cookbook. It would have to get at least 90% of things better than how they are done otherwise, in some measurable way, and would have to be at least 98% accurate in terms of what it describes actually being REST (it could survive a few SNAFUs, but bullschildt is worse than useless), and at least 50% of it would have to be about REST (a not-purely-REST book could also do - take our recent discussion on URI design - as long as that was in a separate section clearly not labelled REST, then coolyaboolya).

It would not have to be authoritative; indeed, it would benefit considerably from not being authoritative, on various counts:

Firstly: whether something is REST or not, and whether something gets a benefit from using REST or not, is a matter of computer science. If you want to be authoritative you're going to have to write a computer science book. Computer science books are mainly read by computer scientists; they're sometimes written by hackers (but mainly those hackers that are also language lawyers; most hackers will read computer science too of course, but not in the same way or for the same reasons as computer scientists) and never by my-mother-said-all-the-money-was-in-computers type programmers. The computer scientists are already as well served or well disserved by the material out there as they're going to get (and they are the sort of people who read PhD dissertations anyway). The hackers like efficiency, scalability and Good Things, and have a natural distrust of some of the personalities that we are (with varying degrees of accuracy) seen as arguing against. For various reasons they are well motivated as a community to come on board here. In other words, these are the two groups that are already here, and most of us on this list are already in one camp or the other (indeed, mostly both to some extent, though describing my own acumen in computer science as primitive is practically inflating it). The whole point of a cookbook is to reach the my-mother-said-all-the-money-was-in-computers programmers (those that stayed after dot-bomb because they realised that, whatever their mother was saying, they weren't cut out for a career in bio-science). The constraints necessary to be authoritative are counter to this; we need informative, not normative, descriptions here.

Secondly: authoritative means "set in stone", at least as far as some sort of versioning goes. Set in stone is good when either you have attained perfection (a hubristic idea) or there is some pressing technical need for such immutability (why standards tend to be immutable in any given version, though even they can be obsoleted by a different version).

Thirdly: authoritative implies an authority. There is no absolute authority here (not even Roy, on which matter more below, though he's as close as can be; still, REST being a style means he can't be quite as much an authority on REST as, say, K&R were on C before ISO replaced them as the authority). There is no technical basis for selecting one and no moral basis for asserting one; if we go ahead and proclaim an authority, why should anyone else listen to us on the matter? We do have people who know what they're talking about on either the entirety or a portion of the matter, and we also have some that are well-known (Roy being the first person to come to mind, Mark the second), but that's not the same as being able to be authoritative.

The only advantage I can see in being authoritative is that more people would buy that book than any rival. Good for royalties, but not good for adoption.
A chapter giving "a description of REST so fast it'll make your head spin", then a breakdown of HTTP in the abstract with concrete examples from various client-side and server-side technologies, followed by a bunch of chapters showing how to handle particular matters, before (and this is an important bit) a chapter on "how it all fits together" (which will hopefully be when a lot of people have their moment of Satori) is the approach, I think. Then some general "these are also good things" on a bunch of web matters that are clearly not REST but which are not counter to it (your hobby-horse of URI design; a good basic understanding of Unicode [take my C++ examples out of http://www.hackcraft.net/xmlUnicode/ and fix it up a bit and you have what IMHO every hacker on the web should know - if not necessarily without having to check things every now and again - about that one]; and so on as a value-add [hmm, some stuff on AJAX and how it can fit in, or fail to fit in, with REST would be good]). Bagsy I the chapter on how to deal with caching :)

> Either the core REST community is going to come together with Roy's
> blessing, to shepherd the market maturation and to define an authoritative
> interpretation of the REST architecture style of common use cases, or it
> will all devolve into chaos and the term REST will be much sullied.

I disagree here too, on three different points.

Firstly, I read about an architectural style and came to the conclusion that it was good theory that was also proven in practice. I didn't drink any Kool-Aid. I have a great amount of respect for Roy, but I don't think we need his blessing for anything. Certainly such a blessing is a very good sign that one is getting things right (and disagreement from him would be a serious warning sign), but the man is a great computer scientist and a lucid and clear writer, not a god. Devolving into chaos is preferable to descending into a personality-cult.

Secondly, I think the only thing we *need* to do as a community is write good code. Ideally we should be writing publicly-visible, publicly-accessible code that is demonstrably successful. However, some of us often get no further than reducing the amount of dependency upon session state in a bunch of code, and replacing a few links that take harmful action with POSTs so that GET isn't being abused quite as much, in an application that is still buggy in this regard when we're finished with it. Still, we do that and demonstrate the advantages to our colleagues and we're moving things forward. The only thing "shepherding market maturation" is likely to do is to develop a reputation for some of us. This is all good, but it's not important.

Thirdly, the term REST will get sullied if it's successful. It just will. Think of any other concept in programming, e.g. OOP. Think of the hissy-fits over whether language A has OOP despite it not having guarantee B. With any term in computing, if you can't hold up a CD-ROM and say "this is X here" then it'll get sullied. The only counter-examples I can think of are a few terms that were originally analogies and so actually sullied another word (e.g. "computer"), or that became obsoleted but outlived that obsolescence, becoming identified with something they were no longer appropriate to (e.g. "program").

While I'm making my posts way too long I'll also add that the best computing books are generally short.
Stuart Charlton wrote:
> The big adoption will occur when OSS groups & vendors come
> out with a new breed of tools that don't just staple a bag
> labeled "REST" on the side, but actually provide an
> agent-oriented platform that's based on the architecture.
> All the effort has been too server-focused, in my view.

As a side note, I'd really like to see the REST architecture style of constrained interface and URLs for everything be adopted by some frameworks used for *DESKTOP* development. I think it would be phenomenally valuable for desktop application automation, interoperability, and usability to have the REST model of URLs and constraints. I use Windows, and on Windows the closest thing to it (besides the browser itself) is Windows Explorer when "Display full path in title bar" is turned on.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
Alan Dean wrote:
>
> On 5/31/07, Mike Schinkel <mikeschinkel@...
> <mailto:mikeschinkel%40gmail.com>> wrote:
>>
>> Who wants to do this? I raise my hand, and I'm pretty sure from private
>> conversation that Alan Dean does too...
>
> I concur with Mike, and raise my hand too.
>
> Alan

I'll raise my hand to help as well.

A cookbook would help, and would probably inspire more frameworks. There has been a lot written on REST, but very little of it has been categorized or edited down to something more accessible for beginners. The REST Wiki [1] might be a good starting point.

[1] http://rest.blueoxen.net/cgi-bin/wiki.pl

mike

--
mikeyp@...
http://www.snaplogic.org
Jon Hanna wrote:
> > But the cookbook would
> > need to be authoritative.
>
> At this point I disagree. The cookbook would have to be a
> good cookbook.

Again, I was probably being too casual with my words. The point is we need some authoritative source to give guidance so we don't have a bunch of different self-appointed "experts" telling people conflicting stories about what is good REST. We need a Pope, not a group of warring Mullahs. And it doesn't have to be a cookbook, although a cookbook would be otherwise useful too. It could come from Roy, but I don't think he has interest in leading such an initiative. So it needs to be a proxy for Roy, i.e. a small group of people he respects enough to entrust with such authority, and with whom there is a line of communication so that he can give direction in unclear cases.

> The constraints necessary to be authoritative are counter to
> this; we need informative not normative descriptions here.

You are taking my description of "authoritative" far too literally (my fault for using the wrong term). I just mean we need some generally respected source of information to cut down on "the bullschildt." :)

> The only advantage I can see in being authoritative is that
> more people would buy that book than any rival. Good for
> royalties, but not good for adoption.

Lots of books would be good. Lots of books with conflicting descriptions of good REST would be bad. It's more about having something people can point to in order to end debate on topics that have already been resolved.

> I disagree here too, on three different points.
>
> Firstly, I read about an architectural style and came to the
> conclusion that it was good theory that was also proven in
> practice. I didn't drink any Kool-Aid. I have a great amount
> of respect for Roy, but I don't think we need his blessing
> for anything.

I've assumed that the whole community felt the need for his blessing based on comments from some. If I'm wrong, then no need for Roy after all, I guess.
Sorry Roy. '-)

> Devolving into chaos is preferable to descending into a
> personality-cult.

Not that I necessarily disagree, assuming the personality is benevolent. OTOH, have you heard of this thing called Ruby on Rails... '-)

> Secondly, I think the only thing we *need* to do as a
> community is write good code.

I respectfully say this is an idealistic view that misunderstands human nature. A large percentage of the population just want to be told what to do and how to do it, and for that they need a well-known and well-respected source. And I'm not being pejorative; I'm a leader in some areas of life; in others I just want someone else to do the thinking and to be told what and how. We can't all be experts on all things, and that's where and why well-known and well-respected sources play such a key role. As a counter example, just look at the state of our mass media these days... ;-)

> The only thing "shepherding market maturation" is likely to
> do is to develop a reputation for some of us. This is all
> good, but it's not important.

So what TimBL did for the web was not important?

> Thirdly, the term REST will get sullied if it's successful.
> It just will.

Certainly, but it will get *more* sullied if it's widely misused than if it's widely used well.

> While I'm making my posts way too long I'll also add that the
> best computing books are generally short.

Yes, and sadly unlike the comments on this thread... :)

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
Mike Schinkel wrote:
> As a side note, I'd really like to see the REST architecture style of
> constrained interface and URLs for everything be adopted by some frameworks
> used for *DESKTOP* development. I think it would be phenomenally
> valuable for desktop application automation, interoperability, and usability
> to have the REST model of URLs and constraints.

Why?

Actually, this does happen sometimes, for various reasons. For one, it can be useful sometimes to include local objects as part of the web (inverting the anti-pattern of assuming remote objects are the same as local ones, by treating local ones as remote - quite a useful abstraction if you are mainly dealing with remote objects but have to deal with a small number of local ones).

However. Looking at the constraints:

Client-Server: Sometimes useful, sometimes not. Mainly useful if there will be more than one client calling into the server.

Client-Stateless-Server: Less often useful. Can help with consistency in some cases, but its main advantage is scalability, and scalability issues on the desktop are different to those on a network. Definitely not *as* useful, at least.

Cache: Caching in a local context tends to be a very different matter. Caching because something is "far away" (e.g. a CPU's instruction cache) has both different requirements and different issues (ensuring freshness has a whole different bunch of pressures than on the web). Caching at a higher level tends to be a matter of something being hard to compute rather than hard to reach. Again, a very different type of caching. Very often write-through caching is possible, even easy, in local contexts though it isn't in REST (why the spec says a PUT means all cached representations are cleared, rather than saying the cache can update straight from the PUT). Generally, not very often analogous.

Uniform Interface: Very much less often useful.
Useful a lot of the time, and there are many analogies of various sorts, but it's also often useful that different code can see the same object through different interfaces when it comes to the desktop. I'd say offering a uniform interface is useful on the desktop, but constraining to one isn't.

Layered System: Can be a useful abstraction, but it can also be useful to be able to by-pass it. Again, useful as an offered view but less useful as a constraint.

Code-On-Demand: I think the advantages/disadvantages balance here is very much different to on the web. Scripts can be even more powerful on the desktop (because they can more often insist upon a given language and/or object model being supported) but they can pose even greater security and other problems (especially since our uniform interface will only ever be an agreed-upon constraint rather than an absolute one, unless we go so far as to build a sandbox). The spate of worms around the turn of this century affecting Microsoft Office products were a case of how COD applies to the desktop. There's also less advantage (your app is on the desktop, the other code is on the desktop; just run the other code!).

Uniform-Layered-Client-Cache-Stateless-Server with optional Code-On-Demand (AKA REST): Not at all clear how well these go together for a desktop app. I can see that perhaps I might go "hmm, this is pretty much a hypermedia system here, I'd probably gain at least more than I lost if I stuck to REST", though I don't think I'd be 100% sure about that decision if I did make it. I very much doubt I'll ever go "hmm, let's not deal with these objects as local objects but produce hypermedia representations of them and work on those" unless I had a very good reason to from other requirements - most likely a matter of wanting to network it in the future.

> I use Windows and on
> Windows the closest thing to it (besides the browser itself) is Windows
> Explorer when "Display full path in title bar" is turned on.
I'm completely missing the connection here.

There's something similar to a REST view in some of the more recent Windows views on folders, where they use hypermedia to link to related objects, but that's one tiny piece of hypermedia-style navigation that is more reminiscent of REST than actually RESTful.

I don't see how the full path in the title bar helps.
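The write-through versus invalidate contrast drawn above (local caches can update themselves from the written value; an HTTP-style cache must simply clear the cached representation on PUT) can be sketched with two toy caches over a shared backing store. These are entirely hypothetical, in-memory classes for illustration, not any real HTTP cache implementation:

```python
# Two toy caches over a dict used as a backing store. The write-through
# cache fills itself directly from the written value; the HTTP-style
# cache only evicts on put, since the cached representation need not
# equal the entity that was written (the point made about RFC 2616 above).

class WriteThroughCache:
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def put(self, key, value):
        self.store[key] = value
        self.cache[key] = value      # easy locally: cache updated from the write

    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self.store[key]
        return self.cache[key]


class InvalidatingCache:
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def put(self, key, value):
        self.store[key] = value
        self.cache.pop(key, None)    # REST-style: just clear the cached copy

    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self.store[key]   # must go back to the origin
        return self.cache[key]
```

Both stay consistent; the invalidating cache just pays an extra origin fetch after every write, which is cheap locally but is exactly the cost accepted on the web to keep intermediaries simple.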
"Mike Schinkel" <mikeschinkel@...> writes:

>> While I'm making my posts way too long I'll also add that the
>> best computing books are generally short.
>
> Yes, and sadly unlike the comments on this thread... :)

Quit your moaning. I was short.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
> Quit your moaning. I was short.

Guilty conscience? I was referring to Jon... ;-p

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
> Why?

If you have to ask, I doubt I can explain. Seriously. But I'll try.

It's far easier to develop interoperably for the web than for the desktop. Why do you think that is? HTTP, content-types, etc. Basically: REST. IMHO, anyway. ;) Interoperability between desktop apps is not great, just as SOAP web service interoperability is not great. Using REST principles, applied judiciously, on the desktop could bring the same interoperability we see on the web to the desktop.

Secondly, I'm always, always, always pining for some way to go back to a particular (coarse-grained) point in an application. Having that point available in an LRL (Local Resource Locator) would be wonderful, just like having a path I can copy and paste from Windows Explorer to file open dialogs is wonderful.

Thirdly, it could unify the desktop and the web and eliminate the (in the future more and more) arbitrary distinction between local and Internet.

> > I use Windows and on
> > Windows the closest thing to it (besides the browser itself) is
> > Windows Explorer when "Display full path in title bar" is turned on.
>
> I'm completely missing the connection here.
>
> There's something similar to a REST view in some of the more
> recent Windows views on folders where they use hypermedia to
> link to related objects but that's one tiny piece of
> hypermedia-style navigation that is more reminiscent of REST
> than actually RESTful.
>
> I don't see how the full path in title bar helps.

The path in the title bar is analogous to the URL that one can save and/or copy from one context and paste into another. If I want to go to the c:\foo\bar\baz\wa\zoo directory I don't have to click, click, click, click, click to get there; I can cut & paste a saved URL, or just select one from history, and Windows Explorer takes me right there. I really wish I had such a thing in my email client, for example.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us

P.S. Please note that I've just been musing on this subject for a while. Maybe I'm totally off-base. I could well be. I dunno. It just seems like a good idea right now.
I've been thinking about this type of idea. What would a system that combined REST, Plan9, and ReiserFS look like?

Some things that are already the same:
* URI/file path for identification
* uniform interface
* visible communication
* client/server

Some things that are close:
* data-driven application state
* browser and shell are close

Some things that are disjoint:
* REST imposes stateless communication
* Reiser exposes (or would someday) a query language (with closure) through the URI
* Plan9 has namespace bindings (like unionfs)

What do you think?

John Heintz

On 5/31/07, Mike Schinkel <mikeschinkel@...> wrote:
> Stuart Charlton wrote:
> > The big adoption will occur when OSS groups & vendors come
> > out with a new breed of tools that don't just staple a bag
> > labeled "REST" on the side, but actually provide an
> > agent-oriented platform that's based on the architecture.
> > All the effort has been too server-focused, in my view.
>
> As a side note, I'd really like to see the REST architecture style of
> constrained interface and URLs for everything be adopted by some frameworks
> used for *DESKTOP* development. I think it would be phenomenally
> valuable for desktop application automation, interoperability, and usability
> to have the REST model of URLs and constraints. I use Windows and on
> Windows the closest thing to it (besides the browser itself) is Windows
> Explorer when "Display full path in title bar" is turned on.
>
> --
> -Mike Schinkel
> http://www.mikeschinkel.com/blogs/
> http://www.welldesignedurls.org
> http://atlanta-web.org - http://t.oolicio.us

--
John D. Heintz
Principal Consultant
New Aspects of Software
Austin, TX
(512) 633-1198
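The common ground in the list above (path-based identification plus a uniform interface) can be caricatured in a few lines. This is a toy, purely in-memory sketch; the paths, method names, and "synthetic file" example are invented for illustration and don't correspond to any real Plan9 or ReiserFS API:

```python
# A toy "everything is a path" store exposing only a uniform
# GET/PUT/DELETE interface, with no per-object methods - the shared
# ground between REST resources and Plan9-style synthetic files.

class PathStore:
    def __init__(self):
        self.files = {}

    def GET(self, path):
        """Return the current representation at path, or None if absent."""
        return self.files.get(path)

    def PUT(self, path, body):
        """Create or replace; idempotent, like HTTP PUT or a file write."""
        self.files[path] = body

    def DELETE(self, path):
        """Idempotent removal; deleting an absent path is a no-op."""
        self.files.pop(path, None)


fs = PathStore()
fs.PUT("/dev/mouse", "x=10 y=20")   # an invented Plan9-ish synthetic file
print(fs.GET("/dev/mouse"))
```

The disjoint items in the list are exactly what the sketch leaves out: statelessness of each call, a query language over paths, and namespace binding would each be a further design decision on top of this.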
Mike Schinkel wrote:
> We need a Pope, not a group of warring Mullahs.

It makes sense for Catholics to favour a Pope, since they believe he is infallible when speaking ex cathedra. There isn't even any sense in which someone can speak ex cathedra about REST, and I personally wouldn't believe they were infallible if they could.

I'm for warring Mullahs. (Or autonomous High Priestesses, which is much the same but generally without as much bloodshed.)

> Lots of books would be good. Lots of books with conflicting descriptions of
> good REST would be bad.

Good in some ways.

The core idea of REST is simple. It's relatively simple to point out basic flaws in the core description of REST. Of course that doesn't make it impossible for people to write books, but an orchestrated campaign of book-burning is probably outside of our capabilities :) Bad books can't be prevented (why we have the term "bullschildt" in the first place; hey, Schildt's publishers say he's an authority too), only good ones encouraged.

Outside of that, when dealing with concrete cases matters will very quickly run into differences of opinion, and also into matters outside of REST that will have to be addressed (you can't write a cookbook without going beyond REST; your very first example will have to include some server settings or some code or at least some markup, and immediately you've got something there that isn't just REST). Better to have disagreement than Lysenko-Michurinism.

> I've assumed that the whole community felt the need for his blessing based
> on comments from some. If I'm wrong, then no need for Roy after all I guess.

I'm of the opinion that Roy is an extremely smart person and very good at conveying his ideas to the rest of us, along with being the person who we all have a debt to in this matter. For that reason any effort with his support is probably a safer bet than any he thinks is a bad idea. That's not the same as needing him.
>> Devolving into chaos is preferable to descending into a
>> personality-cult.
>
> Not that I don't necessarily agree, assuming the personality is
> benevolent.

No, it always has negative effects, even in the cases where those who are so favoured have the good nature not to offend their fans, combined with the modesty not to believe in the personality cult themselves (you mention TimBL elsewhere; from his writings the man seems to be composed of at least 60% modesty sometimes :) It's also unfair on the poor personality (who became so admired because they were smart enough not to appreciate such a position).

> I respectfully say this is an idealistic view that misunderstands human
> nature. A large percentage of the population want to just be told what to
> do and how to do it, and for that they need a well known and well respected
> source.

Point them to the code. Say, e.g., "see how that's faster and doesn't have that bug you get when the user opens another tab". Let them read the code (if it's not readable then I don't really count it as "publicly accessible"). This step towards copy-paste, with a hope of it sometimes being copy-paste-understand, is the same aim as a cookbook's (why we call them "cookbooks"). Not to say a cookbook couldn't *also* help, though. But writing code is the one thing that we're screwed if we don't do.

> We can't all be experts on all things, and that's where and why
> well-known and well-respected sources play such a key role.

Yes, but we can't expect to be the one field where the experts all agree either.

>> The only thing "shepherding market maturation" is likely to
>> do is to develop a reputation for some of us. This is all
>> good, but it's not important.
>
> So what TimBL did for the web was not important?

I don't think what TimBL did was shepherding market maturation either.
Okay, there was some shepherding, or rather cat-herding, but that came from a mixture of relatively high-level education (which plenty here are already doing), the W3C producing concrete specs (which isn't very directly applicable to what we're talking about apart from RFC 2616, and we don't need to write RFC 2616 because Roy et al already have), and TAG findings (quite a few of which already impact upon REST and already give us things to point to, or sadly to point away from).

I like the idea of a cookbook because it's a clear idea to obtain a clear improvement: people read book => people understand REST. I'm not at all in agreement that there's any need for an authoritative book, or any possibility of one.

I don't think we're even able to prevent "REST" as a term being sullied, but we can get to the point where, when management misuse the term, at least the better people at the codeface know what the word is meant to mean. You know what, I want REST to be sullied, if we can keep actual techies knowing what it does mean. When it's being misused and salespeople are saying "of course it can do that, it's RESTful" in response to every customer query, we'll have come a long way. I can't actually think of a single reason why most salespeople should have the vaguest clue why hypermedia is better than "the machine just knowing where everything is already", so I can't get too upset that most won't.

It's also inevitable because REST doesn't happen in a vacuum. As someone pointed out, the sort of discussion we had earlier this week on URIs has a very real risk of sullying people's definition of REST - and for the people it's most important to get right. But expecting people to talk about URIs and then say nothing more on them beyond "oh, they're opaque strings" is impossible.
Still, hype from the same old hype-merchants is likely to be less damaging than someone coming onto this list and thinking URI design is directly relevant to REST because they got the wrong end of the stick.
John D. Heintz wrote: > I've been thinking about this type of idea. > > What would a system that combined REST, Plan9, and ReiserFS look like? I'm not really familiar with Plan9 or ReiserFS. I'll try to read up on them... -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Hi Jon,

On 5/31/07, Jon Hanna <jon@...> wrote:
> Mike Schinkel wrote:
> > As a side note, I'd really like to see the REST architecture style of
> > constrained interface and URLs for everything be adopted by some frameworks
> > used for *DESKTOP* development. I think it would be phenomenally
> > valuable for desktop application automation, interoperability, and usability
> > to have the REST model of URLs and constraints.
>
> Why?

Because a consistent namespace (URI) with a consistent interface on the desktop would be a good thing.

> Actually this does happen sometimes for various reasons. For one it can
> be useful sometimes to include local objects as part of the web
> (inverting the anti-pattern of assuming remote objects are the same as
> local ones by treating local ones as remote - quite a useful abstraction
> if you are mainly dealing with remote objects but have to deal with a
> small number of local ones).
>
> However. Looking at the constraints:
>
> Client-Server: Sometimes useful, sometimes not. Mainly useful if there
> will be more than one client calling into the server.

Look at what FUSE provides for Linux. Or the KDE and GNOME VFS projects. On Windows a file system drive can be local or remote. Doesn't Perforce optionally mount as a Windows drive?

> Client-Stateless-Server: Less often useful. Can help with consistency in
> some cases, but its main advantage is scalability, and scalability
> issues on the desktop are different to those on a network. Definitely
> not *as* useful at least.

Not as useful, but certainly not a drawback.

> Cache: Caching in a local context tends to be a very different matter.
> Caching because something is "far away" (e.g. a CPU's instruction cache)
> has both different requirements and different issues (ensuring freshness
> has a whole different bunch of pressures than on the web). Caching at a
> higher level tends to be a matter of something being hard to compute
> rather than hard to reach.
> Again a very different type of caching. Very often write-through caching
> is possible, even easy, in local contexts though it isn't in REST (why
> the spec says a PUT means all cached representations are cleared, rather
> than saying the cache can update straight from the PUT). Generally, not
> very often analogous.

Not useful except maybe for expensive things.

> Uniform Interface: Very much less often useful. Useful a lot of the time,
> and there are many analogies of various sorts, but it's also often
> useful that different code can see the same object through different
> interfaces when it comes to the desktop. I'd say offering a uniform
> interface is useful on the desktop, but constraining to one isn't.

I totally disagree. A uniform interface is useful everywhere. Unix has files with pipes, Plan9 has files everywhere. These are uniform interfaces and provide massive and cumulative value.

> Layered System: Can be a useful abstraction, but it can also be useful
> to be able to by-pass it. Again, useful as an offered view but less
> useful as a constraint.
>
> Code-On-Demand: I think the advantages/disadvantages balance here is
> very much different to on the web. Scripts can be even more powerful on
> the desktop (because they can more often insist upon a given language
> and/or object model being supported) but they can pose even greater
> security and other problems (especially since our uniform interface will
> only ever be an agreed-upon constraint rather than an absolute one
> unless we go so far as to build a sandbox). The spate of worms around
> the turn of this century affecting Microsoft Office products were a case
> of how COD applies to the desktop. There's also less advantage (your app
> is on the desktop, the other code is on the desktop; just run the other
> code!).

This is useful everywhere a general client _can_ be extended by a service provider. That doesn't have a remote-only boundary.
> Uniform-Layered-Client-Cache-Stateless-Server with optional
> Code-On-Demand (AKA REST): Not at all clear how well these go together
> for a desktop app. I can see that perhaps I might go "hmm, this is
> pretty much a hypermedia system here, I'd probably gain at least more
> than I lost if I stuck to REST", though I don't think I'd be 100% sure
> about that decision if I did make it. I very much doubt I'll ever go
> "hmm, let's not deal with these objects as local objects but produce
> hypermedia representations of them and work on those" unless I had a
> very good reason to from other requirements - most likely a matter of
> wanting to network it in the future.
>
> > I use Windows and on Windows the closest thing to it (besides the
> > browser itself) is Windows Explorer when "Display full path in title
> > bar" is turned on.
>
> I'm completely missing the connection here.
>
> There's something similar to a REST view in some of the more recent
> Windows views on folders, where they use hypermedia to link to related
> objects, but that's one tiny piece of hypermedia-style navigation that
> is more reminiscent of REST than actually RESTful.
>
> I don't see how the full path in title bar helps.

Windows provides a "browser"-like view into the file systems, control panels, history, and a limited set of other "windows services".

John Heintz

--
John D. Heintz
Principal Consultant
New Aspects of Software
Austin, TX
(512) 633-1198
Jon Hanna <jon@...> writes:
>> Lots of books would be good. Lots of books with conflicting descriptions of
>> good REST would be bad.
>
> Good in some ways.
>
> The core idea of REST is simple. It's relatively simple to point out
> basic flaws in the core description of REST. Of course that doesn't make
> it impossible for people to write books, but an orchestrated campaign of
> book-burning is probably outside of our capabilities :) Bad books can't
> be prevented (why we have the term "bullschildt" in the first place; hey,
> Schildt's publishers say he's an authority too), only good ones encouraged.
>
> Outside of that, when dealing with concrete cases matters will very
> quickly get into cases where differences of opinion will enter and also
> where matters outside of REST will have to be addressed (you can't
> cookbook without going beyond REST; your very first example will have to
> include some server settings or some code or at least some markup, and
> immediately you've got something there that isn't just REST).

Come on, fellas. This is a bit silly.

The central aim is a good one: collect a lot of experience in a more formal, recipe/cookbook way.

That's step 1.

If we then find that we violently disagree about some patterns (actually, I don't think that's very likely) then we'll deal with that then. But for now a brain dump is required somewhere.

--
Nic Ferrier
http://www.tapsellferrier.co.uk
I would suggest the following for a quick (sort of :) intro to these systems. http://plan9.bell-labs.com/sys/doc/names.html http://www.namesys.com/whitepaper.html John Heintz On 5/31/07, Mike Schinkel <mikeschinkel@...> wrote: > John D. Heintz wrote: > > I've been thinking about this type of idea. > > > > What would a system that combined REST, Plan9, and ReiserFS look like? > > I'm not really familiar with Plan9 or ReiserFS. I'll try to read up on > them... > > -- > -Mike Schinkel > http://www.mikeschinkel.com/blogs/ > http://www.welldesignedurls.org > http://atlanta-web.org - http://t.oolicio.us > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
Steve Bjorg wrote: >> I'll do better and offer proof. :) >> "Pretty" is a human concept. Humans don't scale. Q.E.D. So where's the proof? '-) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Mike Schinkel wrote:
> It's far easier to develop interoperably for the web than for the desktop.
> Why do you think that is? HTTP, content-types, etc. Basically: REST. IMHO,
> anyway. ;)
No, there's more to it than REST. HTML, XML and CSS are really only
related to REST in that they can point to other resources and be
rendered incrementally.
There's a lot in them that also helps make the web interoperable.
That's also true of DOM and JS though in practice that's where the hard
bits are for interoperability.
We already have quite a lot of interoperability at the network layer
outside of HTTP.
I've never had interoperability problems with FTP, SMTP or POP due to
clients and servers differing in OS, and they aren't RESTful.
> Interoperability on desktop apps is not great just as SOAP webservice
> interoperability is not great.
I think one could make something that departed from REST the same way
that SOAP did and was just as interoperable. Actually, just get rid of
all the SOAP stacks except one and you might do so.
It would still suck, but for reasons other than interoperability. All
the systems doing it would suck in the same way, and so be interoperable.
> Using REST principles applied judiciously to
> the desktop could bring the same interoperability we see on the web to the
> desktop.
You've got a great hammer there, but that isn't a nail.
> Secondly, I'm always, always, always pining for some way to go
> back to a particular (coarse-grained) point in an application. Having that
> point available in an LRL (Local Resource Locator) would be wonderful, just
> like having a path I can copy and paste from Windows Explorer to file open
> dialogs is wonderful.
I've actually done this in desktop apps. This isn't REST though.
> Thirdly it could unify the desktop and the web and
> eliminate the (in the future more and more) arbitrary distinction between
> local and Internet.
The distinctions are not arbitrary. The biggest reason for people
believing the infamous Fallacies of Distributed Computing[1] is that
they don't apply locally. A serious advantage in dealing with HTTP verbs
on URIs-as-nouns over RPC models is that RPC models make the very
non-arbitrary distinction between local and Internet appear arbitrary.
REST is designed to deal with precisely the situations those fallacies
lead people to ignore.
If you really need to get rid of the distinction between the local and
the networked just build it networked and run it on a webserver on
localhost - but realise that that is a sacrifice of adopting REST in
situations where its benefits aren't as directly applicable made for the
gains in terms of it being seamless with the rest of the web, rather
than a direct boon to the desktop.
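That "seamless with the rest of the web" gain can be sketched: a URI-driven client dereferences local and networked resources through the same interface. A minimal illustrative example using Python's standard urllib, whose default opener handles file:// URIs as well as http://:

```python
# Sketch: one client code path for local and remote resources.
# urllib's default opener dispatches http://, https:// and
# file:// URIs through the same urlopen() interface, which is
# the seamlessness being described above.
import pathlib
import tempfile
import urllib.request

def fetch(uri: str) -> bytes:
    """Dereference any supported URI and return the entity body."""
    with urllib.request.urlopen(uri) as resp:
        return resp.read()

# A local resource, addressed exactly like a remote one would be:
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as f:
    f.write(b"hello, desktop")

uri = pathlib.Path(f.name).as_uri()   # e.g. file:///tmp/...
print(fetch(uri))   # b'hello, desktop'
```

Swapping in an `http://localhost/...` URI, as Jon suggests, would require no change to the client code at all.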
> The path in the title bar is analogous to the URL that one can save and/or
> copy from one context and paste into another.
Yes. I don't see how this is REST though.
> If I want to go to the
> c:\foo\bar\baz\wa\zoo directory I don't have to click, click, click, click,
> click to get there, I can cut & paste a saved URL, or just select one from
> history and Windows Explorer takes me right there. I really wish I had such
> a thing in my email client, for example.
I don't see how this is REST either.
There is no client-server. There's no stateless connection (indeed, no
connection). There is no cache. There is no uniform interface. There is
no layered system. Code-on-demand isn't even applicable as a concept
(you already have all the code).
0 out of 7 constraints met.
Of the data elements:
Resource: Closest thing is the files. Let's give you that one rather than
argue - 1 data element.
URI: Yep, can even be a real URI ("file:///...")
Representation: This is a big one where your argument falls down. Okay,
if you have an NTFS file system you can have different streams in a file
and sort of kludge it that way, but you can't get to that easily through
what's put in front of you. On balance; no.
Representation metadata: No distinction between rep, res and ctrl
Resource metadata: No distinction between rep, res and ctrl
Control metadata: No distinction between rep, res and ctrl
2 out of 6 at a push. 1 out of 6 really, as talking about files as
resources is okay, but talking about resources as files is not only
wrong but dangerously so (since it's a common mistake).
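The representation/resource split Jon says the file model lacks is what HTTP content negotiation provides: one resource, several representations, selected by media type. A toy illustration (the resource, media types and bodies are made up for the example; this is not any real desktop or HTTP API):

```python
# Toy illustration of one resource offering several representations,
# selected by media type as HTTP conneg does. A plain file path, by
# contrast, names exactly one byte stream. All names here are
# invented for the example.

resource = {
    "text/html": "<h1>2007 report</h1>",
    "text/csv": "year,total\n2007,42\n",
}

def negotiate(resource, accept):
    """Pick a representation using the client's Accept preference list."""
    for media_type in accept:
        if media_type in resource:
            return media_type, resource[media_type]
    raise LookupError("406 Not Acceptable")

media_type, body = negotiate(resource, ["text/csv", "text/html"])
print(media_type)   # text/csv
```

The same "resource" serves an HTML client and a CSV client without either needing a second file.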
Connectors:
I can't even find a forced analogy for most of them. Let's say Explorer
is a client, the file system a server and leave it at 2 out of 5.
Components:
2 out of 4 for the same reason.
Similarity to REST therefore is (0/7) * (2/6) * (2/5) * (2/4) = 0%
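Jon's product can be checked mechanically. A quick sketch, taking each tally as a fraction (so the last factor is 2/4; the 0/7 constraint factor zeroes the product regardless of the rest):

```python
# Check of the similarity tally: constraints, data elements,
# connectors, components. Exact fractions avoid any float noise;
# the zero constraints factor forces the whole product to zero.
from fractions import Fraction

factors = [
    Fraction(0, 7),  # constraints met
    Fraction(2, 6),  # data elements
    Fraction(2, 5),  # connectors
    Fraction(2, 4),  # components
]

similarity = Fraction(1)
for factor in factors:
    similarity *= factor

print(f"{float(similarity):.0%}")   # 0%
```

Note the multiplicative form means any single all-zero category yields 0% overall, which is doing most of the work in this argument.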
[1] http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
Nic James Ferrier wrote: > The central aim is a good one: collect a lot of experience in a more > formal, recipe/cookbook way. > > That's step 1. > > If we then find that we violently disagree about some patterns > (actually, I don't think that's very likely) then we'll deal with that > then. Yeah. Let's just write a book already.
Jon Hanna wrote:
> > We need a Pope, not a group of warring Mullahs.
>
> It makes sense for Catholics to favour a Pope, since they
> believe he is infallible when speaking ex cathedra.

Are you assuming I'm Catholic? If so, you'd be very wrong... :)

> There isn't even any sense in which someone can speak ex
> cathedra about REST, and I personally wouldn't believe they
> were infallible if they could.
>
> I'm for warring Mullahs. (Or autonomous High Priestesses,
> which is much the same but generally without as much bloodshed).

And I used to like the way you think... '-)

> > Lots of books would be good. Lots of books with conflicting
> > descriptions of good REST would be bad.
>
> Good in some ways.
>
> The core idea of REST is simple. It's relatively simple to
> point out basic flaws in the core description of REST. Of
> course that doesn't make it impossible for people to write
> books, but an orchestrated campaign of book-burning is
> probably outside of our capabilities :)
> Bad books can't be prevented (why we have the term
> "bullschildt" in the first place; hey, Schildt's publishers
> say he's an authority too), only good ones encouraged.

I'm not trying to prevent, just catalyze guidance for encouragement. But you have to coddle human nature 'cause you ain't gonna change it.

> Outside of that, when dealing with concrete cases matters
> will very quickly get into cases where differences of opinion
> will enter and also where matters outside of REST will have
> to be addressed (you can't cookbook without going beyond
> REST; your very first example will have to include some
> server settings or some code or at least some markup, and
> immediately you've got something there that isn't just REST).

In the case of differing opinions, you document both.

> Better to have disagreement than Lysenko-Michurinism.

You read far too much into my comments! Well-respected guidance is not a bad thing.
> > I've assumed that the whole community felt the need for his blessing
> > based on comments from some. If I'm wrong, then no need for Roy after
> > all I guess.
>
> I'm of the opinion that Roy is an extremely smart person and
> very good at conveying his ideas to the rest of us, along
> with being the person who we all have a debt to in this
> matter. For that reason any effort with his support is
> probably a safer bet than any he thinks is a bad idea. That's
> not the same as needing him.

We don't need no stinkin' Roy! (with apologies to Mel Brooks...)

> >> Devolving into chaos is preferable to descending into a
> >> personality-cult.
> >
> > Not that I don't necessarily agree, assuming the personality is
> > benevolent.
>
> No, it always has negative effects, even in the cases where
> those who are so favoured have the good nature not to offend
> their fans combined with modesty not to believe in the
> personality cult themselves (you mention TimBL elsewhere;
> from his writings the man seems to be composed of at least
> 60% modesty sometimes :)

You're taking some of my comments way too seriously. BTW, I was quite impressed with the story in "Weaving the Web." Maybe I was hoodwinked, who knows.

> > I respectfully say this is an idealistic view that misunderstands
> > human nature. A large percentage of the population want to just be
> > told what to do and how to do it, and for that they need a well known
> > and well respected source.
>
> Point them to the code.

Whose code? And who does the pointing? And when they point at differing code? Interoperability doth not come from different viewpoints.

Moderation in all things. You might just be being a contrarian, but to hear your arguments one would believe you support only full anarchy (well, maybe 90% anarchy. :)

> Not to say a cookbook couldn't *also* help though. But
> writing code is the one thing that we're screwed if we don't do.

I'm all for code.
Assuming we can agree on what code to write and document the disagreements.

> Yes, but we can't expect to be the one field where the experts
> all agree either.

You assumed that I said that. I didn't. An authoritative source can say where there are equally good approaches and/or multiple approaches each with pros and cons.

> > So what TimBL did for the web was not important?
>
> I don't think what TimBL did was shepherding market maturation either.
>
> Okay, there was some shepherding, or rather cat-herding, but
> that came from a mixture of relatively high-level education
> (which plenty here are already doing), the W3C producing
> concrete specs (which isn't very directly applicable to what
> we're talking about apart from RFC 2616, and we don't need to
> write RFC 2616 because Roy et al already have), and TAG
> findings (quite a few of which already impact upon REST and
> already give us things to point to, or sadly to point away from).

Are we nitpicking word definitions? As I said to Nic, to-may-to, to-mah-to. Guiding REST on its path of growth and helping it to avoid the abyss. Whatever words you want to use.

> When it's being
> misused and salespeople are saying "of course it can do that,
> it's RESTful" in response to every customer query we'll have
> come a long way.

We've come a long way already. They are.

> Since I can't actually think of a single
> reason why most salespeople should have the vaguest clue
> hypermedia is better than "the machine just knowing where
> everything is already" I can't get too upset that most won't.

But they COULD understand the pass/fail of a logo/certification program, for example. And it could be an open certification, i.e. people in the community could be approved to certify so that open source could be certified. Open your mind to the possibilities...

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
> Yeah. Let's just write a book already.

Collectively self-publish? Who's on board...

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
Jon Hanna <jon@...> writes:
> Nic James Ferrier wrote:
>> The central aim is a good one: collect a lot of experience in a more
>> formal, recipe/cookbook way.
>>
>> That's step 1.
>>
>> If we then find that we violently disagree about some patterns
>> (actually, I don't think that's very likely) then we'll deal with that
>> then.
>
> Yeah. Let's just write a book already.

Chapter 1.

It was a dark, dark, stormy night and there was a schooner in the channel. The captain said to one of his men "send a SOAP request to the lighthouse would you, to make sure we're ok to go past?".

And then the schooner sank.

But the SOAP stack vendor assured the crew's wives that it was not the fault of the SOAP call that the responding message had failed to be delivered. Instead it was the crew's failure to use the correct WSDL discovery that had led to their demise. The widows were much comforted by this cheerful news.

All except one, Royamena Paddocks....

I'll leave Chapter 2 to the rest of you (err.... sorry, pun intended again).

--
Nic Ferrier
http://www.tapsellferrier.co.uk
Bob Haugen wrote:
> On 5/31/07, Bill de hOra <bill@... <mailto:bill%40dehora.net>> wrote:
> > I've seen arrant nonsense around transactions
>
> What particular arrant nonsense around transactions did you have in mind?

Here's how this seems to work. You take a processing model (eg transactions) you think is a necessary and fundamental solution to a problem (eg value exchange). After a while, you figure that model isn't going to work out in the new environment. You conclude that the WWW (or REST) is only suited for simple things. A more accurate conclusion is that said model might be contingent to states of affairs, no matter how much knowledge or tooling you have built up around it. One then might want to seek a model that solves the problem and whose semantics will hold on the WWW. Feel free to replace "WWW" with "Internet".

> We've discussed transactions a few times on this list, and I
> understand the new REST book has a treatment as well.
>
> Do you think:
> * RESTful transactions are impossible, or
> * some attempts at RESTful transactions have been nonsensical, or
> * assertions that you need WS-* to do transactions are nonsense, or
> * something else?

I think starting with transactions as /the/ way to solve value exchange is starting from the wrong place; people seem to end up complaining that the web is only good for simple things, or that the web needs to be fixed, and go down the same rathole the RPC and BPEL crowds went. Except for controlled cases (like Subversion commits) the WWW is the wrong environment for ACID semantics.

cheers
Bill
Comments on the equation below.
On 5/31/07, Jon Hanna <jon@...> wrote:
>
> There is no client-server. There's no stateless connection (indeed, no
> connection). There is no cache. There is no uniform interface. There is
> no layered system. Code-on-demand isn't even applicable as a concept
> (you already have all the code).
>
> 0 out of 7 constraints met.
There certainly is client-server. Windows and UNIX provide VFS and
purely virtual file system abstractions. Plan9 exposes almost
everything through 9P (a client-server protocol).
There is a cache as well: use Windows to browse into a .zip file.
First time it's slow, then much faster.
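The speed-up John describes is ordinary memoization of an expensive listing. A hedged sketch (the archive contents and the single-archive setup are invented for the example; a real shell would key the cache on the archive path and invalidate on change):

```python
# Sketch of the cache effect seen when browsing into a .zip:
# the first listing pays to parse the archive's directory, and a
# memoized lookup answers repeat visits from the cache. One
# in-memory archive stands in for the file system here.
import functools
import io
import zipfile

ARCHIVE = io.BytesIO()
with zipfile.ZipFile(ARCHIVE, "w") as z:
    z.writestr("docs/readme.txt", "hello")
    z.writestr("docs/spec.txt", "world")

@functools.lru_cache(maxsize=None)
def list_archive(name: str) -> tuple:
    """Expensive on the first call, served from cache afterwards.
    The name argument is only the cache key in this toy setup."""
    with zipfile.ZipFile(ARCHIVE) as z:
        return tuple(z.namelist())

print(list_archive("example.zip"))        # slow path: parses archive
print(list_archive("example.zip"))        # fast path: cache hit
print(list_archive.cache_info().hits)     # 1
```

The missing piece relative to REST-style caching, as Jon notes, is a freshness rule for invalidating that cache when the archive changes.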
The cmd line shell and graphical browsers are clients to a uniform
(but not COD) interface. Windows exposes My Printers and the
Control Panel inside Explorer; GNOME and KDE expose even more
things through a VFS.
Layered system and COD? Well, those aren't very common. I would argue
they should be an option.
>
> Of the data elements:
> Resource: Closest thing is the files. Lets give you that one rather than
> argue - 1 data element.
> URI: Yep, can even be a real URI ("file:///...")
> Representation: This is a big one where your argument falls down. Okay,
> if you have an NTFS file system you can have different streams in a file
> and sort of kludge it that way, but you can't get to that easily through
> what's put in front of you. On balance; no.
Windows does fall down on this, especially because many things in
Explorer aren't available in the cmd.exe shell.
Linux with FUSE, Gnome, and KDE certainly provide this type of capability.
There is no sign of conneg in these file system APIs though;
that is clearly missing.
> Representation metadata: No distinction between rep, res and ctrl
> Resource metadata: No distinction between rep, res and ctrl
> Control metadata: No distinction between rep, res and ctrl
Windows provides streams, Linux provides xattrs, Reiser provides...
well anything.
>
> 2 out of 6 at a push. 1 out of 6 really as talking about files as
> resources is okay, but talking about resources as files is not only
> wrong but dangerously so (since it's common mistake).
POSIX files can be pipes and sockets. These can be backed by any
service implementation. Plan9 models TCP and UDP connections as files!
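That uniformity is easy to demonstrate: a pipe is not a disk file, yet it is driven through exactly the same descriptor operations, so generic file code works on it unchanged. A minimal sketch using POSIX-style pipes from Python's standard library:

```python
# Sketch of the uniform file interface: a kernel pipe, not a disk
# file, read and written through the same descriptor operations
# any 'file' code uses. The payload is an arbitrary example.
import os

r, w = os.pipe()               # two descriptors backed by a kernel pipe
os.write(w, b"GET /status\n")  # any file-style writer works here
os.close(w)                    # EOF for the reader

with os.fdopen(r, "rb") as f:  # and any file-style reader works here
    print(f.read())            # b'GET /status\n'
```

Sockets get the same treatment: once you hold a descriptor, code that only knows "files" can service it, which is the substitutability being claimed for the file model.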
>
> Connectors:
> I can't even find a force analogy for most of them. Let's say explorer
> is a client, the file system a server and leave it at 2 out of 5.
>
> Components:
> 2 out of 4 for the same reason.
Lots of things mentioned above could be connectors or components. I'll
grant it's not too common. FUSE moving into the Linux kernel is
speeding the development of filesystem components and connectors. For
examples, see the list of file systems implemented in FUSE:
http://fuse.sourceforge.net/wiki/index.php/FileSystems
>
> Similarity to REST therefore is (0/7) * (2/6) * (2/5) * (2/4) = 0%
>
The numbers are currently higher than this, and I would argue we should
be pushing the envelope further.
> [1] http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
--
John D. Heintz
Principal Consultant
New Aspects of Software
Austin, TX
(512) 633-1198
Jon Hanna wrote:
> No, there's more to it than REST. HTML, XML and CSS are
> really only related to REST in that they can point to other
> resources and be rendered incrementally.
>
> There's a lot in them that also helps make the web interoperable.
>
> That's also true of DOM and JS though in practice that's
> where the hard bits are for interoperability.
>
> We already have quite a lot of interoperability at the
> network layer outside of HTTP.
>
> I've never had interoperability problems with FTP, SMTP or
> POP due to clients and servers differing in OS, and they
> aren't RESTful.
I get the impression you are related to that genie on The X Files where
everyone who made a wish got the letter of what they asked for but not the
spirit. The Genie kept saying "Well, you didn't specify..."
>
> > Interoperability on desktop apps is not great just as SOAP
> webservice
> > interoperability is not great.
>
> I think one could make something that departed from REST the
> same way that SOAP did and was just as interoperable.
> Actually, just get rid of all the SOAP stacks except one and
> you might do so.
It's hard to debate a hypothetical...
> It would still suck, but for reasons other than
> interoperability. All the systems doing it would suck in the
> same way, and so be interoperable.
All systems suck. Some just suck less. :)
> > Using REST principles applied judiciously to the desktop
> > could bring the same interoperability we see on the web
> > to the desktop.
>
> You've got a great hammer there, but that isn't a nail.
I like to think that it is more like a chisel [1]. '-)
> > Secondly, I'm always, always, always pining for some way to go back
> > to a particular (coarse grained) point in an application. Having
> > that point available in an LRL (Local Resource Locator) would be
> > wonderful, just like having a path I can copy and paste from Windows
> > Explorer to file open dialogs is wonderful.
>
> I've actually done this in desktop apps. This isn't REST though.
I didn't say it was REST. I said REST would provide the functionality that
would be equivalent.
> > Thirdly it could unify the desktop and the web and eliminate the
> > (in the future more and more) arbitrary distinction between local
> > and Internet.
>
> The distinctions are not arbitrary. The biggest reason for
> people believing the infamous Fallacies of Distributed
> Computing[1] is that they don't apply locally. A serious
> advantage in dealing with HTTP verbs on URIs-as-nouns over
> RPC models is RPC models make the very non-arbitrary
> distinction between local and Internet appear arbitrary.
The Fallacies of Distributed Computing do not necessarily imply an
inverse set of Fallacies of Local Computing.
Your fallacy here is assuming that y = func(x) implies x = func(y).
> REST is designed to deal with precisely the situations those
> fallacies lead people to ignore.
Which doesn't mean it can't be used locally.
> If you really need to get rid of the distinction between the
> local and the networked just build it networked and run it on
> a webserver on localhost -
That wasn't the need; it was a happy by-product.
> but realise that that is a sacrifice of adopting REST in situations
> where its benefits aren't as directly applicable, made for the gains
> in terms of it being seamless with the rest of the web, rather than a
> direct boon to the desktop.
Just because every benefit is not applicable doesn't invalidate all
benefits.
> > The path in the title bar is analogous to the URL that one can save
> > and/or copy from one context and paste into another.
>
> Yes. I don't see how this is REST though.
As above, I used it as an ANALOGY. Not as a specific example.
> There is no client-server.
Wrong. You yourself should know it does not take an Internet connection
to establish the roles of client and of server. One component calling
another in the same EXE can take on the roles of client and server.
Add +1.
> There's no stateless connection (indeed, no connection).
> There is no cache.
> There is no uniform interface.
> There is no layered system.
There is no reason there can't be, and for these I'm arguing that there
should be. ESPECIALLY the uniform interface.
Add +4
> Code-on-demand isn't even applicable as a concept (you already have
> all the code).
Take your argument here to the limit and we'd all be hand-coding
parameterless assembler. This is an architectural concept and does not
require the Internet for it to be applicable.
Add +1
>
> 0 out of 7 constraints met.
>
Revised: 6 out of 6 constraints met (you miscounted, at least in your list).
> Of the data elements:
> Resource: Closest thing is the files. Lets give you that one
> rather than argue - 1 data element.
A resource is a resource. Why obscure with files?
> URI: Yep, can even be a real URI ("file:///...")
A "real" URI? When is a URI not real?
> Representation: This is a big one where your argument falls down.
Why?
> Okay, if you have an NTFS file system you can have
> different streams in a file and sort of kludge it that way,
> but you can't get to that easily through what's put in front
> of you. On balance; no.
You've made tons of assumptions in your arguments against, assumptions which
I did not make.
Maybe I should clarify. I was not referring to building the O/S using REST
(why reinvent that wheel?) I was referring to using REST to build the
things that people are constantly building; applications.
> Representation metadata: No distinction between rep, res and ctrl
> Resource metadata: No distinction between rep, res and ctrl
> Control metadata: No distinction between rep, res and ctrl
Only because of your faulty assumptions.
> 2 out of 6 at a push. 1 out of 6 really, as talking about
> files as resources is okay, but talking about resources as
> files is not only wrong but dangerously so (since it's a common
> mistake).
Revised: 6 out of 6.
> Connectors:
> I can't even find a force analogy for most of them. Let's say
> explorer is a client, the file system a server and leave it
> at 2 out of 5.
>
> Components:
> 2 out of 4 for the same reason.
I don't follow you here at all.
>
> Similarity to REST therefore is (0/7) * (2/6) * (2/5) * (2/4) = 0%
I can't fully correct until I understand the last two:
Similarity to REST therefore is (6/6) * (6/6) * (?/5) * (?/4) = ??%
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
[1]
http://en.wikipedia.org/wiki/Hammer#Tools_used_in_conjunction_with_hammers
P.S. You seem to be hell bent on being a contrarian today. Anything going
on we need to know about? ;-)
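The "similarity" tallies traded in the message above are a product of fractions, which is why a single zero factor (0/7) zeroes the whole score. A small sketch of that scoring, not taken from the thread itself:

```python
from fractions import Fraction

def similarity(scores):
    """Multiply (met, total) constraint tallies into one overall score."""
    result = Fraction(1)
    for met, total in scores:
        result *= Fraction(met, total)
    return result

# Jon's original tally: one zero factor zeroes the product.
print(similarity([(0, 7), (2, 6), (2, 5), (2, 4)]))  # 0
# Mike's revision, keeping Jon's last two factors as placeholders:
print(similarity([(6, 6), (6, 6), (2, 5), (2, 4)]))  # 1/5
```

Whatever the right factors are, multiplicative scoring means any category scored at zero makes the debate about the others moot.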
>>>>> "Mike" == Mike Schinkel <mikeschinkel@...> writes:
Mike> I'm more extreme on this issue than probably anyone else
Mike> here, but I want to say for the record that a discussion of
Mike> the "advantages of URL that can't be understood by humans"
Mike> w/o a huge disclaimer (much stronger than that given :) has
Mike> the strong potential for people to rationalize why its "good
Mike> for them to use obtuse URLs" when they'd prefer not to worry
Mike> about it.
Mike> Such people will dismiss 99 good reasons why URLs should be
Mike> well designed and latch onto the 1 dubious reason for making
Mike> them obtuse. In other words, people will hear what they want
Mike> to hear: "confirmation bias."
Mike, you make an excellent point here. Readable URLs are core to
coming up with a good REST architecture. It forces people to think
about resources: what they are, how they should be returned, etc.
I'm going to be even more extreme: there is no application that uses
obtuse URLs that is a good showcase for REST.
--
Live long and prosper,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
> I've set my bozo bit for WS and SOA types who are repositioning
> themselves as REST stalwarts. Spotting a bandwagon is not an indicator
> of competence.

Would the failure to spot the bandwagon five years ago (or more) be a
different sort of indicator for these WS types?
> Maybe we could create something for REST like the ActiveState
> (Perl/Python/PHP) Network? [1]
> Google adwords could probably self-fund it.

<pedantic strength="+9">adsense</pedantic>
> > Maybe we could create something for REST like the ActiveState
> > (Perl/Python/PHP) Network? [1]
> > Google adwords could probably self-fund it.
>
> <pedantic strength="+9">adsense</pedantic>

<pedantic strength="+99">where do you think the adsense money ultimately
comes from...?</pedantic>

But I get your point. ;-)

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
*sigh* the downside of living in the UK is that all the interesting
conversations happen when you are in the land of nod.

On 6/1/07, Bill de hOra <bill@...> wrote:
> > On 5/31/07, Bill de hOra <bill@... <mailto:bill%40dehora.net>> wrote:
> > > I've seen arrant nonsense around transactions
> > ... WWW is the wrong environment for ACID semantics.

Agreed.

Two-phase commit over the web is brittle & highly latent. This is why
WS-Transaction (AT) is a bad idea.

http://en.wikipedia.org/wiki/WS-Transaction

"Long-running" (aka compensatory) transactions are the way to go.

http://en.wikipedia.org/wiki/Long_running_transaction

Regards,
Alan Dean
http://thoughtpad.net/alan-dean
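For readers who want the compensation model concrete, here is a minimal in-memory sketch of a long-running (compensating) transaction. The driver and the step names are illustrative assumptions, not part of WS-Transaction or any specification cited above.

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order; on failure, apply the
    compensations for already-completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            for comp in reversed(completed):
                comp()
            return "compensated"
        completed.append(compensate)
    return "committed"

log = []

def refuse_credit():
    raise RuntimeError("credit refused")

steps = [
    (lambda: log.append("debit checking"),
     lambda: log.append("refund checking")),
    (refuse_credit, lambda: log.append("never runs")),
]
print(run_saga(steps))  # compensated
print(log)              # ['debit checking', 'refund checking']
```

Note the trade-off raised in the next message: this only works when every completed action really can be undone by its compensation.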
On 01/06/07, Jon Hanna <jon@...> wrote:

<snipped/> Lots of good stuff. Particularly about the audience.

I like the idea.
I think it's needed as a resource to point to.
I'm more than willing to help.

I'm unsure whether a wiki is right or wrong, though it could be a way
to get a document outline together.

Offer. I'll buy a website. One year only. See if it works and interest
lasts. I'll edit it. <caveat>docbook input please</caveat>

I'm no REST expert, but I've lots of experience putting docbook
together. I'll leave authority for change with this list. I'm currently
unemployed so I have time to get it started.

Who wants to put an outline together?

regards
--
Dave Pawson
XSLT XSL-FO FAQ.
http://www.dpawson.co.uk
From Robin Cover's pages:

REST and Web Services in WSDL 2.0
Eran Chinthaka, IBM developerWorks

For clients to interact with remotely hosted resources, REpresentational
State Transfer (REST) is fast becoming an alternative for Web services,
especially because REST doesn't require users to understand and use
SOAP. There are ongoing debates as to which one is better suited in
today's highly interactive environment. However, recent efforts,
including Web Services Description Language (WSDL) 2.0, have tried to
give Web services the ability to benefit from REST and use REST
concepts. The HTTP binding specification, available in WSDL 2.0
adjuncts, talks a lot about this. The first part of this article focuses
on how REST is married to Web services in WSDL 2.0. The second part
explains how it's being implemented in the Apache Web services project.

Does WSDL 2.0 enable REST? The motivation of WSDL 2.0 HTTP binding is
that it allows services to have both SOAP and HTTP bindings. The service
implementation deals with processing application data, often represented
as an XML element, and the service doesn't know whether that data came
inside a SOAP envelope, HTTP GET, or HTTP POST. WSDL 2.0 HTTP binding
enables you to expose a service as a resource to be invoked using HTTP
methods. At the same time, you need to understand that HTTP binding
doesn't enable you to implement a full REST style system. This is often
debated by a lot of people, and it all depends on how much you believe
in what REST can deliver.

http://www.ibm.com/developerworks/webservices/library/ws-rest1/

--
Dave Pawson
XSLT XSL-FO FAQ.
http://www.dpawson.co.uk
Mike Schinkel wrote:
> It's far easier to develop interoperably for the web than for the
> desktop. Why do you think that is? HTTP, content-types, etc.
> Basically: REST. IMHO, anyway. ;)
>
> Interoperability on desktop apps is not great just as SOAP webservice
> interoperability is not great. Using REST principles applied
> judiciously to the desktop could bring the same interoperability we
> see on the web to the desktop. Secondly, I'm always, always, always
> pining for some way to go back to a particular (coarse grained) point
> in an application. Having that point available in an LRL (Local
> Resource Locator) would be wonderful, just like having a path I can
> copy and paste from Windows Explorer to file open dialogs is
> wonderful. Thirdly it could unify the desktop and the web and
> eliminate the (in the future more and more) arbitrary distinction
> between local and Internet.

A primary issue here is one of loose vs. strong coupling. In a desktop
scenario it is feasible and acceptable to have strong coupling, because
you have a much narrower domain of artifacts and more control over how
those artifacts can behave. In a network environment you have
heterogeneous systems with different architectures and approaches, so
loose coupling is much more likely to succeed.

People have used distributed technologies to build desktop applications
in the past. For example, there's the CORBA infrastructure in GNOME, the
now defunct Berlin project, and the various bus- and queue-based
inter-application messaging frameworks in BSD, Solaris, etc. Those
technologies, however, are still used in a very fine-grained, strongly
coupled manner.

I do agree with you that there are lessons to be learned from the
interoperability measures and standardisation that have developed in
the web space. If all filesystems were able to store MIME content-type
and other metadata, it would simplify building cross-platform
applications.
I'm less sure about your concept of bookmarks for entire application state identified by "LRLs" or the general applicability of REST principles to desktop applications.
Since I find it easier to critique an existing document, here's my view
of how a cookbook might look. Order not considered.

Introduction.
  What it's all about
  HTTP, a little background
  Why REST
  Alternatives, when to use

HTTP
  Common sense view, applicability, interpretations
  When to use the verbs

Terminology.
  Roy's terms mapped to application.

Managing state
  Possibly via simple examples?

Representations.
  Alt media from a single resource, media types etc.

Frameworks.
  ???

Examples.
  Many, simple examples with discussion.

Now what have I missed?

regards
--
Dave Pawson
XSLT XSL-FO FAQ.
http://www.dpawson.co.uk
On 6/1/07, Alan Dean <alan.dean@...> wrote:
> On 6/1/07, Bill de hOra <bill@...> wrote:
> > ... WWW is the wrong environment for ACID semantics.
>
> Agreed.
>
> Two-phase commit over the web is brittle & highly latent. This is why
> WS-Transaction (AT) is a bad idea.
>
> http://en.wikipedia.org/wiki/WS-Transaction
>
> "Long-running" (aka compensatory) transactions are the way to go.
>
> http://en.wikipedia.org/wiki/Long_running_transaction

2PC != ACID. I agree about ACID and WS-AT, but one of my previous
transaction mentors claimed that some form of 2PC is unavoidable for
coordination among independent agents. He was smarter than me, so I'll
tentatively believe it.

Compensation is extremely difficult in many (maybe most) cases. Can you
really undo all of the effects of a set of distributed actions?

Another pattern exists that is 2PC but not ACID, variously called
Provisional-Final, Tentative Business Operations (by Pat Helland), or
Escrow. It does not require compensation or holding locks across phases.
The basic idea is that in the first phase, you update resources
provisionally (the implementation could be a separate entity). Then in
the second phase, if the transaction commits, you update finally (update
the final entity, e.g. the "real" resource), or if the transaction
cancels, you delete the provisional entity.

In REST, this probably uses a separate transaction resource, or at least
that's true of all the RESTful transaction proposals I've seen on this
list or in the recent book. I don't know why Helland opposes this
pattern to 2PC, because it is 2PC, just not ACID.

Probably other patterns could be found that are 2PC but not ACID, and
that would also work RESTfully.
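A hedged sketch of the Provisional-Final pattern just described: phase 1 records the update as a separate provisional entity, and phase 2 either promotes it to the final resource or discards it, with no locks held between phases. The class, method, and URI names are assumptions for illustration.

```python
class ResourceStore:
    """Two-phase, non-ACID store: provisional entities beside final ones."""

    def __init__(self):
        self.final = {}        # the "real" resources
        self.provisional = {}  # tentative updates awaiting an outcome

    def prepare(self, uri, value):
        # Phase 1: record the update provisionally; the final
        # resource is untouched and stays readable throughout.
        self.provisional[uri] = value

    def commit(self, uri):
        # Phase 2a: the transaction commits; update the final entity.
        self.final[uri] = self.provisional.pop(uri)

    def cancel(self, uri):
        # Phase 2b: the transaction cancels; delete the provisional entity.
        self.provisional.pop(uri, None)

store = ResourceStore()
store.prepare("/rooms/412", "reserved")
store.commit("/rooms/412")
print(store.final)        # {'/rooms/412': 'reserved'}
print(store.provisional)  # {}
```

Cancellation never needs a compensating action because the final entity was never touched, which is the point of the pattern.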
"Dave Pawson" <dave.pawson@...> writes: > On 01/06/07, Jon Hanna <jon@...> wrote: > > <snipped/> Lots of good stuff. Particularly about the audience. > > I like the idea. > I think its needed as a resource to point to. > I'm more than willing to help. > > I'm unsure a wiki is right or wrong > though it could be to get a document outline together. > > Offer. > I'll buy a website. One year only. See if it works and interest lasts. > I'll edit it. > <caveat>docbook input please</caveat> No thanks. It would have to be easy for me to contribute. I'm certainly not getting into docbook. Be good to move this discussion away as soon as possible though. We don't want to crowd this list out. -- Nic Ferrier http://www.tapsellferrier.co.uk
"Dave Pawson" <dave.pawson@...> writes: > Since I find it easier to critique an existant document. > Here's my view of how a cookbook might look. Order not considered. > > > Introduction. > What its all about > HTTP, little background > Why REST > Alternatives, when to use > > HTTP > Common sense view, applicability, interpretations > When to use the verbs > > Terminology. > Roys terms mapped to application. > > Managing state > Possiby via simple examples? > > Representations. > Alt media from a single resource, media types etc. > > Frameworks. > ??? > > Examples. > Many, simple examples with discussion. > > > Now what have I missed? I think we should move this discussion elsewhere. I've setup a list: rest-cookbook-discuss@... You can also subscribe online: http://sandypit.tapsellferrier.co.uk/cgi-bin/mailman/listinfo/rest-cookbook-discuss Dave? Can you repost to this list? -- Nic Ferrier http://www.tapsellferrier.co.uk
Mike Schinkel wrote:
> Are you assuming I'm Catholic? If so, you'd be very wrong... :)

No, I'm extending your metaphor.

On a point of order though, let's both agree to avoid analogies to
religious positions. We run a risk of offending someone, and I
personally run a risk of enjoying my analogies too much and letting them
get the better of themselves (it matches too nicely with one of my other
fields of interest).
John D. Heintz wrote:
> There certainly is client-server. Windows and UNIX provide VFS and
> purely virtual file system abstractions. Plan9 exposes almost
> everything through 9P (a client-server protocol).

And then certain specialised applications deliberately bypass that. And
they should. Most apps shouldn't, but some should.

And this is where it stops being a client-server constraint and starts
being a client-server service that's there if you want it but not if you
don't. The same is true, and should be true, for the rest of desktop
computing.

Can experience from REST inform desktop computing? Yes, though in my
case probably too much (I'm if anything biased towards using a method
that works on the web, since I don't code for the desktop much).

Is the style directly applicable? I think no.

This is also true of other web-like matters (I think Mike's point about
address bars is something where the experience of Windows Explorer is
akin to that of the web, but isn't much related to REST).
From the article:

"Does WSDL 2.0 enable REST? The motivation of WSDL 2.0 HTTP binding is
that it allows services to have both SOAP and HTTP bindings. The service
implementation deals with processing application data, often represented
as an XML element, and the service doesn't know whether that data came
inside a SOAP envelope, HTTP GET, or HTTP POST. WSDL 2.0 HTTP binding
enables you to expose a service as a resource to be invoked using HTTP
methods. At the same time, you need to understand that HTTP binding
doesn't enable you to implement a full REST style system. This is often
debated by a lot of people, and it all depends on how much you believe
in what REST can deliver."

This at least seems to show the author is aware that what WSDL 2
supports is POX over HTTP, not REST.

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/

On Jun 1, 2007, at 10:36 AM, Dave Pawson wrote:
> From Robin Covers pages
>
> REST and Web Services in WSDL 2.0
> Eran Chinthaka, IBM developerWorks
> <snipped/>
> http://www.ibm.com/developerworks/webservices/library/ws-rest1/
>
> --
> Dave Pawson
> XSLT XSL-FO FAQ.
> http://www.dpawson.co.uk
> Basic idea is that in the first phase, you update resources
> provisionally (implementation cd be a separate entity). Then in the
> 2nd phase, if the transaction commits, update finally (update final
> entity, e.g. "real" resource). Or if the transaction cancels, delete
> the provisional entity.

Might be clearer if the provisionality was implemented as a resource
property, e.g. "provisional" or "tentative" and then "committed" or
"cancelled".
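The property-based variant suggested here keeps the tentative update visible on the resource itself rather than in a separate entity. A sketch, with the status values taken from the message and the field names assumed:

```python
def prepare(resources, uri, value):
    # Phase 1: the update lives on the resource, flagged tentative,
    # instead of being hidden in a separate provisional entity.
    resources[uri] = {"value": value, "status": "tentative"}

def finish(resources, uri, committed):
    # Phase 2: flip the status; no data is moved or deleted.
    resources[uri]["status"] = "committed" if committed else "cancelled"

rooms = {}
prepare(rooms, "/rooms/412", "reserved")
finish(rooms, "/rooms/412", committed=True)
print(rooms["/rooms/412"]["status"])  # committed
```

The design trade-off versus the separate-entity approach is that clients reading the resource can now see (and must interpret) in-flight tentative state.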
And again:
http://www.crummy.com/2007/05/31/2
<excerpt>
Transactions of the Transaction Society: Sam and I got a question from
reader Scott Davidson about the famous RESTful transaction design
(quoted at length by Jon Udell here, in case you bought so many copies
of the book that you're now deadlocked trying to decide which one to
look up page 231 in). I think it's worth responding to at length:
I'm perplexed why your transaction example in Chap. 8 didn't
simply create a transaction resource that included both checking &
savings account as well as the transfer amount w/ a PUT (defining the
resource as XML in the request body). Then you could simply call
/transaction/11a5/commit or even just assume that this is a request to
commit the transaction by default and avoid the 2nd call altogether.
Is there a specific reason why it was not done this way? I can already
see the "REST-haters" rolling their eyes to this three
request/response transaction pattern.
The short answer is that if I'd presented it that way, the
"REST-haters" would have an even better reason to roll their eyes: it
would look like I couldn't think of a resource-oriented way to do
transactions and I'd had to fall back to the RPC style.
</excerpt>
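The reader's proposal quoted in the excerpt, PUT one transaction document carrying both accounts and the amount, then commit it with a second request, can be modelled in memory like this. The URIs and field names are assumptions for illustration, not the book's actual design.

```python
# In-memory stand-ins for server-side state.
accounts = {"/accounts/checking": 200, "/accounts/savings": 50}
transactions = {}

def put_transaction(uri, body):
    # First request: PUT the whole transaction document to a
    # client-chosen transaction URI; nothing is applied yet.
    transactions[uri] = dict(body, status="pending")

def commit_transaction(uri):
    # Second request: commit; the server applies the transfer.
    txn = transactions[uri]
    accounts[txn["from"]] -= txn["amount"]
    accounts[txn["to"]] += txn["amount"]
    txn["status"] = "committed"

put_transaction("/transaction/11a5",
                {"from": "/accounts/checking",
                 "to": "/accounts/savings",
                 "amount": 75})
commit_transaction("/transaction/11a5")
print(accounts)  # {'/accounts/checking': 125, '/accounts/savings': 125}
```

Collapsing the two requests into one (commit-by-default on PUT) is exactly the shortcut the excerpt debates: fewer round trips, but the transaction stops being an addressable resource with its own lifecycle.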
"Today, InfoQ publishes a sample chapter from RESTful Web Services, a book authored by Leonard Richardson and Sam Ruby. The book covers the principles of the REST style, and explains how to build RESTful applications using Ruby on Rails, Restlet (for Java) and Django (for Python). On this occasion, InfoQ's Stefan Tilkov had a chance to talk to the authors about their motivations for writing this book and their views on REST and Web services." http://www.infoq.com/articles/richardson-ruby-restful-ws Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ P.S.: Let me know if you believe links like these are inappropriate for this list.
On 6/1/07, Jon Hanna <jon@...> wrote:
> John D. Heintz wrote:
> > There certainly is client-server. Windows and UNIX provide VFS and
> > purely virtual file system abstractions. Plan9 exposes almost
> > everything through 9P (a client-server protocol).
>
> And then certain specialised applications deliberately by-pass that.

How exactly do special apps by-pass the VFS? I'd really like some
examples because I don't understand. I know that UnionFS can expose
consistency problems if users modify underlying fs nodes....

> And they should do. Most apps shouldn't but some should.

Ok, so if most apps don't, then why invalidate the value of
client-server (9P, local NFS, FUSE, Windows what-ya-ma-callit)?

> And this is where it stops being a client-server constraint and starts
> being a client-server service that's there if you want it but not if
> you don't.

I thought of a good example: USB. Today, I plug in a shiny new USB
printer, and then have to detect and install the tightly coupled binary
libraries to enable printing. Blech.

Here's a vision of how it could work:
1) I plug in my printer and a new FS tree is exposed (/usb/345)
2) The OS inspects this new tree and crawls it for well supported
   document types
3) The OS finds a supported application/printer+xml format (at say
   /usb/345/printer) with a <print-form href="custom-form"> element
3.1) I'm given a choice to trust this new device service
4) Using that data a new printer is added to my system.
5) When I print something and choose "Print Properties" the
   /usb/345/print/custom-form representation is exposed (with full CoD)

While I can tightly couple the code, I see distinct advantages to the
loosely coupled CoD solution.

> The same is true and should be true for the rest of desktop computing.

Most desktop computing is tightly coupled with poor interoperability,
right? So now that we agree on that, how would we take the lessons
learned on the Web and apply them to the desktop?

> Can experience from REST inform desktop computing - yes, though in my
> case probably too much (I'm if anything biased towards using a method
> that works on the web since I don't code for the desktop much).
>
> Is the style directly applicable - I think no.

Why don't you suggest how harm could be done by applying these
constraints?

> This is also true of other web-like matters (I think Mike's point
> about address bars is something where the experience of Windows
> explorer is akin to that of the web, but isn't much related to REST).

It is related: Windows Explorer gives us just a tiny bit of common
identification and interface. I can create a shortcut on the desktop
(bookmark) to a control panel icon, or a specific printer. Those are
cool URIs.

It all breaks down fast from there though. Why can't I have a shortcut
to a meeting on Friday in Outlook? Because that desktop app doesn't
expose uniform identity or interfaces.

--
John D. Heintz
Principal Consultant
New Aspects of Software
Austin, TX
(512) 633-1198
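Steps 2-3 of the USB vision above, crawling the newly mounted tree for a document type the OS already understands, could be sketched like this. The tree layout and the application/printer+xml media type are the message's own hypotheticals, not a real protocol.

```python
# A toy device tree mapping exposed paths to media types, as a stand-in
# for what mounting /usb/345 might reveal.
device_tree = {
    "/usb/345/printer": "application/printer+xml",
    "/usb/345/manual":  "text/plain",
}

def find_supported(tree, supported_types):
    """Return the paths whose media type the OS knows how to handle."""
    return sorted(path for path, mtype in tree.items()
                  if mtype in supported_types)

print(find_supported(device_tree, {"application/printer+xml"}))
# ['/usb/345/printer']
```

The loose coupling comes from negotiating on media types rather than on device-specific driver binaries: any device exposing a supported type is usable without new code.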
There is an interesting paper that describes this as Promises.

http://www-db.cs.wisc.edu/cidr/cidr2007/papers/cidr07p36.pdf

(I think this link came to me from Pat Helland's blog...)

That paper describes three types of Promise:
1) Named view: promising a specific resource (room 412)
2) Anonymous view: promising $200 from a credit card acct
3) View via properties: promise by properties (non-smoking room,
   ocean view, ...)

John Heintz

On 6/1/07, Bob Haugen <bob.haugen@...> wrote:
> > Basic idea is that in the first phase, you update resources
> > provisionally (implementation cd be a separate entity). Then in the
> > 2nd phase, if the transaction commits, update finally (update final
> > entity, e.g. "real" resource). Or if the transaction cancels, delete
> > the provisional entity.
>
> Might be more clear if the provisionality was implemented as a
> resource property, e.g. "provisional" or "tentative" and then
> "committed" or "cancelled".

--
John D. Heintz
Principal Consultant
New Aspects of Software
Austin, TX
(512) 633-1198
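The three promise types listed above could be modelled as simple data types. This is a loose reading of the paper; the class names mirror the message, but the matching rule is my assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NamedView:
    """Promise a specific resource, e.g. room 412."""
    resource: str

@dataclass(frozen=True)
class AnonymousView:
    """Promise a quantity from a pool, e.g. $200 from a credit card acct."""
    amount: int

@dataclass(frozen=True)
class PropertyView:
    """Promise any resource matching the given properties."""
    properties: frozenset

def satisfies(promise, resource_properties):
    # Matching rule for property-based promises only, in this sketch:
    # every promised property must be present on the candidate resource.
    return promise.properties <= set(resource_properties)

print(satisfies(PropertyView(frozenset({"non-smoking", "ocean-view"})),
                {"non-smoking", "ocean-view", "king-bed"}))  # True
```

The appeal for the provisional/final discussion is that a property-based promise leaves the provider free to choose which concrete resource fulfils it at commit time.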
On 6/1/07, John D. Heintz <jheintz@...> wrote:
>
> It all breaks down fast from there though. Why can't I have a shortcut
> to a meeting on Friday in outlook? Because that desktop app doesn't
> expose uniform identity or interfaces.

This is a change of topic ... but ...

I wonder if PIM (personal information management) would be an excellent
use-case for REST?

Imagine a world where all the PIM apps out there implemented RESTful
interfaces. Synchronization of appointments, reminders, contacts, etc.
would be a doddle. You could instruct an agent app to choose what
synched to where and so on. Dismissal of a reminder on your office to
start driving to a meeting could cause a reminder to be set from your
mobile (cell on the other side of the pond) immediately prior to the
meeting - and the two PIMs need not be the same and need not have
knowledge of one another.

... gazes wistfully into the middle distance ...

Alan
That sounds like a good idea. APP might be the perfect binding glue for
the synchronizations...

On 6/1/07, Alan Dean <alan.dean@...> wrote:
> On 6/1/07, John D. Heintz <jheintz@...> wrote:
> >
> > It all breaks down fast from there though. Why can't I have a
> > shortcut to a meeting on Friday in outlook? Because that desktop app
> > doesn't expose uniform identity or interfaces.
>
> This is a change of topic ... but ...
>
> I wonder if PIM (personal information management) would be an
> excellent use-case for REST?
> <snipped/>

--
John D. Heintz
Principal Consultant
New Aspects of Software
Austin, TX
(512) 633-1198
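At its simplest, the synchronization agent imagined in this exchange could compare per-entry update stamps across two PIM stores and copy the newer entry each way. This sketch is an illustration only; it is not the Atom Publishing Protocol, and the entry shape is assumed.

```python
def sync(pim_a, pim_b):
    """Copy each entry to whichever side is missing it or holds an
    older copy, keyed by a shared entry id."""
    for entry_id in set(pim_a) | set(pim_b):
        a, b = pim_a.get(entry_id), pim_b.get(entry_id)
        if a is None or (b is not None and b["updated"] > a["updated"]):
            pim_a[entry_id] = b
        elif b is None or a["updated"] > b["updated"]:
            pim_b[entry_id] = a

office = {"mtg1": {"updated": 2, "note": "Friday meeting, room 4"}}
mobile = {"mtg1": {"updated": 1, "note": "Friday meeting"},
          "call1": {"updated": 1, "note": "call Alan"}}
sync(office, mobile)
print(office["call1"]["note"])  # call Alan
print(mobile["mtg1"]["note"])   # Friday meeting, room 4
```

The two PIMs need no knowledge of each other, only a shared resource-per-entry view; this is exactly where APP-style feeds of entries would slot in.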
That's one of the projects I am working on. It's called Beatnik, is
written in SwingFX
and is available here:
https://sommer.dev.java.net/source/browse/sommer/trunk/misc/
AddressBook/www/
It's an early prototype.
With many services publishing foaf files (one of them had 15 million
in December) [1],
this is an application that can be immediately useful.
Since the data is out there in RDF formats, which is very flexible,
more flexible than DOM tools are good
for, one has to use some of the 500 rdf tools out there. [2]
The nice thing is it is very powerful. If one makes the right
architectural decisions, one can get some very interesting
features [3].
If people would like to help out on that project, feel free to join.
There is so much to do in this area, there is space for everyone, and
more.
Henry
[1] http://blogs.sun.com/bblfish/entry/15_million_foaf_files
[2] They were 250 when I posted this
http://blogs.sun.com/bblfish/entry/250_semantic_web_tools
now there are 500
[3] http://blogs.sun.com/bblfish/entry/beatnik_change_your_mind
On 1 Jun 2007, at 09:23, Alan Dean wrote:
> On 6/1/07, John D. Heintz <jheintz@...> wrote:
> >
> > It all breaks down fast from there though. Why can't I have a
> shortcut
> > to a meeting on Friday in outlook? Because that desktop app doesn't
> > expose uniform identity or interfaces.
>
> This is a change of topic ... but ...
>
> I wonder if PIM (personal information management) would be an
> excellent use-case for REST?
>
> Imagine a world where all the PIM apps out there implemented RESTful
> interfaces. Synchronization of appointments, reminders, contacts, etc
> would be a doddle. You could instruct an agent app to choose what
> synched to where and so on. Dismissal of a reminder on your office to
> start driving to a meeting could cause a reminder to be set from your
> mobile (cell on the other side of the pond) immediately prior to the
> meeting - and the two PIMs need not be the same and need not have
> knowledge of one another.
>
> ... gazes wistfully into the middle distance ...
>
> Alan
>
>
On 6/1/07, Max Voelkel <voelkel@...> wrote:
>
> I always thought the two communities shouldn't ignore each other so
> much ;-)
>
> [1] http://nepomuk.semanticdesktop.org/xwiki/

I will have a look. I also feel that the two communities have much in
common and suspect that Danny Ayers would agree. I have been active
providing feedback to the HTTP-in-RDF initiative at the W3C, for
example.

Regards,
Alan Dean
http://thoughtpad.net/alan-dean
> A primary issue here is one of loose vs. strong coupling. In a desktop
> scenario it is feasible and acceptable to have strong coupling,
> because you have a much narrower domain of artifacts and more control
> over how those artifacts can behave. In a network environment you have
> heterogeneous systems with different architectures and approaches, so
> loose coupling is much more likely to succeed.

Respectfully, I think that justification is probably really a
rationalization, the broader industry just not having realized it yet. I
look at things like the loose coupling of Python and the tight coupling
of C# and ask myself "Are we *really* better off with C# than Python?"

Things to ponder.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
John D. Heintz wrote:
> Here's a vision of how it could work:
> 1) I plug in my printer and a new FS tree is exposed (/usb/345)
> 2) The OS inspects this new tree and crawls it for well
> supported document types
> 3) The OS finds a supported application/printer+xml format (at say
> /usb/345/printer) with an <print-form href="custom-form"> element
> 3.1) I'm given a choice to trust this new device service
> 4) Using that data a new printer is added to my system.
> 5) When I print something and choose "Print Properties" the
> /usb/345/print/custom-form representation is exposed (with full CoD)
>
> While I can tightly couple the code, I see distinct
> advantages to the loosely coupled CoD solution.

+1

> > This is also true of other web-like matters (I think Mike's point
> > about address bars is something where the experience of Windows
> > Explorer is akin to that of the web, but isn't much related to REST).
>
> It is related: Windows Explorer gives us just a tiny bit of
> common identification and interface.
>
> I can create a shortcut on the desktop (bookmark) to a
> control panel icon, or a specific printer. Those are cool URIs.
>
> It all breaks down fast from there though. Why can't I have a
> shortcut to a meeting on Friday in Outlook? Because that
> desktop app doesn't expose uniform identity or interfaces.

You explained it better than I could. Thanks.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
Hi all:
To start, this is not about the "Is REST=CRUD?" debate. This is me asking
how to create a typical CRUD module for a web application that is RESTful.
And forgive the fact that this has probably already been answered 1000 times
before; if it has I haven't seen it.
What started this was someone asked the following on the TurboGears list:
> I've been refactoring my admin interface and trying to get it
> nice and RESTful ( oh the buzzwords! ). As I am new to this
> resting business, I'm wondering, is there a compelling reason
> to need to do:
>
> page/7/edit
> vs
> page/edit/7
>
> The example in the book uses the first, but I'm finding the
> second would be much easier to do with CherryPy.
>
> Thanks helpful gurus!
Several people replied with the mechanics of TurboGears, but nobody mentioned
the use of the "edit" verb, so I decided to clarify and then realized that,
while I could tell him what not to do (i.e. "don't use verbs"), I couldn't
tell him how to do it correctly.
In the past when I have written such modules I would typically write them
like this:
http://examples.com/pages/{page_id}/?mode={mode}
Where {mode} was one of:
<nothing>
add
insert
edit
update
list
delete
confirm
And maybe a few more. BTW, these are not especially "well designed" URLs,
but it's how I coded before I heavily researched URL concepts.
Note that there are several interesting pairs: "add" and "insert", "edit"
and "update", and "delete" and "confirm". Both "add" and "edit" displayed a
data entry form, whereas "insert" and "update" inserted and updated from
those forms, respectively, with the latter 302 redirecting to <nothing>
(a.k.a. "show"), and "confirm" displayed a delete confirmation screen with
"delete" performing the actual delete and 301 redirecting to "page/list".
When "add" mode was requested, the form contained {mode} as a hidden field
with a value of "insert"; when "edit" mode was requested, the hidden {mode}
field got a value of "update". It all works well and good, but I now
understand it's not RESTful.
To be RESTful, we GET "page/{page_id}" for display, PUT to "page/{page_id}"
for an update, DELETE "page/{page_id}" for a delete, and GET "page/list" for
a list. But how do we deal with getting an edit form before the PUT? How do
we handle delete confirmation? How do we handle requesting a data entry
form designed for a new page (POST to "page/new"?)? Please answer assuming
both AJAX and no AJAX, if you will.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
P.S. This sounds like a great cookbook solution, and I debated where to
submit the question but defaulted to the larger community to ensure getting
more perspectives.
On Jun 2, 2007, at 11:34 PM, Mike Schinkel wrote:
> To be RESTful, we GET "page/{page_id}" for display, PUT to "page/
> {page_id}"
> for an update, DELETE "page/{page_id}" for a delete, and GET "page/
> list" for
> a list. But how do we deal with getting an edit form before the
> PUT? How do
> we handle delete confirmation? How do we handle requesting a data
> entry
> form designed for a new page (POST to "page/new"?) Answer both
> assuming
> AJAX and also no AJAX, if you will.
FWIW, Rails uses <collection>/<id>;edit to get the edit form, and
links a Javascript function to pop up a confirmation dialog for
DELETE (which is simulated by tunneling a hidden field through POST).
The semicolon has been changed to a slash in the most recent versions
(edge).
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
MS:
i'm relatively new to REST, but this is what i am doing for CRUD-type
situations:
define a single URI for the object/document/table you are working with:
/users/
GET /users/ returns the list of users
GET /users/{user_id} returns a single user object
POST /users/ with a body creates a new user (and returns a redirect to
the newly created /users/{user_id})
PUT /users/{user_id} with a body updates the existing user (optionally
redirect to the same location for the updated object or return no
body)
DELETE /users/{user_id} deletes the existing object
note that there is a single URI (w/ id decoration) and no verbs of any kind.
hope this helps.
mamund
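As a rough illustration of the scheme above (the in-memory store, function
name, and status-code choices are all hypothetical, not part of any
framework), a minimal dispatcher might look like:

```python
# In-memory stand-in for a users table; all names are illustrative only.
USERS = {}
NEXT_ID = [1]

def handle(method, path, body=None):
    """Dispatch (method, path) per the single-URI, no-verbs scheme above."""
    parts = [p for p in path.split("/") if p]
    if not parts or parts[0] != "users":
        return 404, None
    if len(parts) == 1:                       # /users/
        if method == "GET":                   # list the users
            return 200, sorted(USERS)
        if method == "POST":                  # create; redirect to new URI
            uid, NEXT_ID[0] = NEXT_ID[0], NEXT_ID[0] + 1
            USERS[uid] = dict(body or {}, id=uid)
            return 303, "/users/%d" % uid
    elif len(parts) == 2:                     # /users/{user_id}
        uid = int(parts[1])
        if uid not in USERS:
            return 404, None
        if method == "GET":                   # a single user
            return 200, USERS[uid]
        if method == "PUT":                   # update the existing user
            USERS[uid] = dict(body or {}, id=uid)
            return 204, None
        if method == "DELETE":                # delete the existing user
            del USERS[uid]
            return 204, None
    return 405, None
```

Note that the URI space stays noun-only; the "verbs" live entirely in the
HTTP method column of the dispatcher.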
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
I like to think of forms as separate resources from the primary resource.
Off the top of my head (non-AJAX):
GET /users - lists users, has link to /users/creatorform
GET /users/creatorform - get form to create a new user
POST form to /users to create user
GET /users/{id} - view user, has links to /users/{id}/editorform and
/users/{id}/deleterform
GET /users/{id}/editorform - get form to edit a user
POST form to /users/{id} to update user
GET /users/{id}/deleterform - confirms delete
POST to /users/{id}/deleter to delete user
I'm sure there are arguments to be made for structuring the URI
hierarchy differently, or for using a semicolon in front of editform. Note
that I purposely chose to use .../editform rather than .../edit - I
want to be clear that I'm referring to a noun, and not a verb.
I seem to recall that we even split /users/editorform into multiple
forms when we had resources that were too complex or too large to be
edited on a single page. I don't recall the details, but I seem to
remember it went something like:
GET /users/{id}/editform1,
POST form to /users/{id}/editform1 and then get redirected to
/users/{id}/editform2
and so on (I may have this wrong - I forget the details of
transitioning to the second page)
I think the AJAX version would be almost the same except that you'd
PUT to /users/{id} to update and simply DELETE /users/{id} to delete.
--Chuck
On 5/28/07, Bill de hOra <bill@...> wrote:
> I think one reason is that without conneg, you end up providing a URI
> for each supported format, and URI proliferation is hardly a good thing.
> A few systems do that now; the Zimbra API would be one, moinmoin is
> another. Here's a simple example:
>
> xhtml:
> <http://www.citizensinformation.ie/categories/money-and-tax/tax/duties-and-vat/stamp-duty-on-financial-cards>
>
> atom:
> <http://www.citizensinformation.ie/categories/money-and-tax/tax/duties-and-vat/stamp-duty-on-financial-cards/entry.xml>

Is there any precedent or value in combining the techniques? I guess that
would mean issuing a redirect in response to the negotiation headers. Using
the above example, perhaps if a client requests the xhtml document but
prefers or only accepts atom, the server responds with a 303 and a Location
header pointing to the atom document.

-Ross
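A sketch of that combination, assuming a hypothetical variant map and
honoring only the order in which the client lists types (a real server
would also weigh q-values):

```python
# Hypothetical map: media type -> URI of the format-specific resource.
VARIANTS = {
    "application/xhtml+xml": "/cards/stamp-duty",
    "application/atom+xml": "/cards/stamp-duty/entry.xml",
}

def negotiate(requested_uri, accept_header):
    """Return (status, location): 200 if the requested URI already serves
    the client's preferred type, else 303 pointing at that variant."""
    # Take the first type we serve, in the order the client lists them.
    for item in accept_header.split(","):
        media_type = item.split(";")[0].strip()   # drop q-value parameters
        if media_type in VARIANTS:
            target = VARIANTS[media_type]
            if target == requested_uri:
                return 200, requested_uri
            return 303, target                    # redirect to the variant
    return 406, None                              # nothing acceptable
```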
Mike,
This doesn't answer your question directly but it's a way to implement the
situation and can be used in CherryPy quite easily:
http://routes.groovie.org/manual.html#restful-services
It can be used AJAXian or not, of course, depending on how you implement it.
Scott
On 6/1/07, Stefan Tilkov <stefan.tilkov@...> wrote:
> From the article:
>
> "Does WSDL 2.0 enable REST?
> The motivation of WSDL 2.0 HTTP binding is that it allows services to
> have both SOAP and HTTP bindings. The service implementation deals
> with processing application data, often represented as an XML
> element, and the service doesn't know whether that data came inside a
> SOAP envelope, HTTP GET, or HTTP POST. WSDL 2.0 HTTP binding enables
> you to expose a service as a resource to be invoked using HTTP
> methods. At the same time, you need to understand that HTTP binding
> doesn't enable you to implement a full REST style system. This is
> often debated by a lot of people, and it all depends on how much you
> believe in what REST can deliver."
>
> This at least seems to show the author is aware that what WSDL 2
> supports is POX over HTTP, not REST.

Eran is one of the Axis2 team... he's probably implemented that very feature.
On 5/25/07, Scott Chapman <scott_list@...> wrote:
> The bigger question that I'm wrestling with is, "How far do you take the
> mapping of complex queries to the RESTful URL paradigm?" I.e. if you have
> a query, "SELECT post_id FROM posts WHERE year(post_date) = 2007 and
> month(post_date) = 4", how do you map that to RESTful URLs?
> This gets arbitrarily complex.
> REST doesn't look like it was made to do a full mapping of URLs to SQL.

That is correct, mostly because you are starting from the wrong end of the
problem: "I have SQL, how do I 'map' that into HTTP". Instead you should be
starting with a RESTful model of your service:

http://bitworking.org/news/How_to_create_a_REST_Protocol

And then build out those resources, which may require writing some SQL.

-joe

--
Joe Gregorio http://bitworking.org
On 6/1/07, Chris Burdess <dog@...> wrote:
> A primary issue here is one of loose vs. strong coupling. In a
> desktop scenario it is feasible and acceptable to have strong
> coupling, because you have a much narrower domain of artifacts and
> more control over how those artifacts can behave. In a network
> environment you have heterogeneous systems with different
> architectures and approaches, so loose coupling is much more likely
> to succeed.

That's the first assumption to question. Yes, you still only have one
sysadmin, but you have very weak ability to do a synchronous update of all
app versions on a single box. Hence DLL hell, RPM hell, JAR hell, and
whatever equivalent for OSGi we are yet to see.

NetKernel and Cocoon both use loosely coupled XML pipelines inside a single
process, because shovelling XML between bits of code is both simple to do
and gives better isolation between parts. Similarly, Ant's XML language is
much less brittle than the Java APIs.

And then there is the best example of all, the Unix pipe, where space-,
comma-, or tab-separated lines provide a wire format that is app neutral.

Premise 1: Loose coupling between libraries in a single process or machine
can provide benefits.

> People have used distributed technologies to build desktop
> applications in the past. For example, there's the CORBA
> infrastructure in GNOME, the now defunct Berlin project, and the
> various bus- and queue-based inter-application messaging frameworks
> in BSD, Solaris, etc. Those technologies, however, are still used in
> a very fine-grained, strongly coupled manner.

CORBA actually came out of Distributed NewWave and Sun's equivalent: it was
desktops that drove them, or at least the impressive demos.

> I do agree with you that there are lessons to be learned from the
> interoperability measures and standardisation that has developed in
> the web space. If all filesystems were able to store MIME content-
> type and other metadata, it would simplify building cross-platform
> applications. I'm less sure about your concept of bookmarks for
> entire application state identified by "LRLs" or the general
> applicability of REST principles to desktop applications.

One interesting thought is what role Atom could have on the desktop. There
are always various pub/sub mechanisms (Windows has COM+ and SENS, Linux is
adopting DBus). What if lots of things were feed sources, other things
transforms, and finally ways of presenting stuff to the user... you could
have some fun there, which is good, because operating systems have got,
well, dull.

-steve
On 6/3/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote:
> Premise 1: Loose coupling between libraries on a single process or
> machine can provide benefits.
Loose coupling of COM+ (etc.) libraries is a well known pattern in the
enterprise.
This is not the same as RESTful COM+ (etc.)
It is true that Roy makes a specific point that REST does not require
HTTP. You could, indeed, write RESTful COM+ code. I guess it would
look rather like this:
public interface IMessage
{
    Dictionary<String, String> Headers { get; set; }
    String Body { get; set; }
}
public interface IRequest : IMessage
{
    Uri Identifier { get; set; }
}
public interface IResponse : IMessage
{
    int Status { get; set; }
}
public interface IEndpoint
{
    IResponse Request(IRequest message);
}
public interface IConnection
{
    IEndpoint Resolve(Uri identifier);
}
Some sample client code would look like:
Uri identifier = new Uri("urn:foo");
IRequest request = new GetRequest(identifier);
// get a COM+ object instance
IEndpoint endpoint = new Connection().Resolve(identifier);
// call COM+ object instance (which might be local or remote)
IResponse response = endpoint.Request(request);
if (response.Status == 200) ...
In other words, anything RESTful will end up looking very like HTTP
even if it *isn't* HTTP. Not many COM+ abstractions look like this in
the wild.
Regards,
Alan Dean
http://thoughtpad.net/alan-dean
Chuck Hinson wrote:
> I like to think of forms as separate resources from the
> primary resource.
>
> Off the top of my head (non-AJAX):
>
> GET /users - lists users, has link to /users/creatorform GET
> /users/creatorform - get form to create a new user
> POST form to /users to create user
> GET /users/{id} - view user, has links to /users/editorform
> and /users/deleterform GET /users/{id}/editorform - get form
> to edit a user
> POST form to /users/{id} to update user GET
> /users/deleterform - confirms delete
> POST to /users/{id}/deleter to delete user
>
> I'm sure there are arguments to be made for structuring the
> uri hierarchy differently or using a semicolon in front of
> editform. Note that I purposely choose to use .../editform
> rather than .../edit - I want to be clear that I'm referring
> to a noun, and not a verb.
>
> I seem to recall the we even split /users/editorform into
> multiple forms when we had resources that were too complex or
> too large to be edited on a single page. I don't recall the
> details, but I seem to remember it went something like:
> GET /users/{id}/editform1,
> POST form to /users/{id}/editform1 and then get redirected to
> /users/{id}/editform2
>
> and so on (I may have this wrong - I forget the details of
> transitioning to the second page)
Thanks. So "editform" et al. are nouns. Hmm. I guess if "edit" were
used, it could also be a noun.
I wonder if this is one of the places where the theoretical model of REST
breaks down for use in the real world? Pure REST would say:
1.) GET {resource} as representation
2.) Modify representation
3.) PUT modified representation to {resource}
This implies that all of step #2 would be handled by the client whereas on
the web when using web browsers and html forms step #2 is actually multiple
steps facilitated by the server. This makes me think that the concept of
"edit" as noun or "editform" might simply be shoehorning reality into the
REST theoretical model.
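One way to keep step #3 safe even when step #2 is spread across several
server-supplied forms is HTTP's ETag/If-Match mechanism mentioned elsewhere
in this thread. A toy in-memory sketch (the store, URIs, and ETag scheme
are all invented for illustration):

```python
# Tiny simulation of GET -> modify -> PUT, with an ETag check (If-Match)
# so a stale PUT is rejected instead of silently clobbering an update.
STORE = {"/page/7": {"etag": "v1", "body": "draft"}}

def get(uri):
    rec = STORE[uri]
    return rec["body"], rec["etag"]      # step 1: representation plus ETag

def put(uri, body, if_match):
    rec = STORE[uri]
    if if_match != rec["etag"]:          # someone else updated first
        return 412                       # 412 Precondition Failed
    rec["body"] = body
    rec["etag"] = "v" + str(int(rec["etag"][1:]) + 1)  # bump the version
    return 204
```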
FYI, I present this not to discredit REST; no, not at all. Instead, I am
trying to understand the limitations of the theoretical model, if such
limitations exist, so as to be pragmatic when applying solutions and so as
not to be a cargo-cultist.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
Thanks for all the responses. I've proposed a strawman over at
<rest-cookbook-discuss@...> for those interested in continuing the
discussion of defining a general CRUD module recipe.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
On 5/31/07, Mike Schinkel <mikeschinkel@...> wrote:
> Stuart Charlton wrote:
> > The big adoption will occur when OSS groups & vendors come
> > out with a new breed of tools that don't just staple a bag
> > labeled "REST" on the side, but actually provide an
> > agent-oriented platform that's based on the architecture.
> > All the effort has been too server-focused, in my view.
>
> As a side note, I'd really like to see the REST architectural style of
> constrained interface and URLs for everything be adopted by some
> frameworks used for *DESKTOP* development.

You are in good company:

http://tirania.org/blog/archive/2005/Nov-26-2.html

-joe

--
Joe Gregorio http://bitworking.org
Joe Gregorio wrote:
> > As a side note, I'd really like to see the REST architectural style of
> > constrained interface and URLs for everything be adopted by some
> > frameworks used for *DESKTOP* development.
>
> You are in good company:
>
> http://tirania.org/blog/archive/2005/Nov-26-2.html

Very nice, thanks! Guess it's not a crazy idea after all... ;-)

-Mike
Yep, and Keith has taken that over now. Brickbats welcome! :) If anyone has
specific comments on Axis2 here, I'll fwd them over to axis-dev.

thanks,
dims

On 6/3/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote:
> On 6/1/07, Stefan Tilkov <stefan.tilkov@...> wrote:
> > This at least seems to show the author is aware that what WSDL 2
> > supports is POX over HTTP, not REST.
>
> Eran is one of the Axis2 team... he's probably implemented that very
> feature.

--
Davanum Srinivas :: http://davanum.wordpress.com
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
> I'm not sure there's been enough information provided about the application
> in question to make any judgments or apply any preferences. Here's an
> example, where having a "hackable" URI scheme would be nonsensical:
>
> http://en.ericjbowman.com/date;transform=1?iso=2007-05-25
>
> I say it does have an orthogonal relationship to REST, in that if my /date
> service had a hierarchical URI allocation scheme it would strongly imply
> a hierarchical organization of the information space which just isn't
> there.
>
> -Eric

http://en.ericjbowman.com/date;transform=1?iso=2007-05-25 is an RPC way of
doing things. Wouldn't something like

http://en.ericjbowman.com/date/2007-05-25.iso/LongFormat.html

be more RESTful? A GET on

http://en.ericjbowman.com/date/2007-05-25.iso/

would return links to each of the different formats which can be returned
by your service. In this way the service is self-describing. At the moment
someone has to guess what the possible parameters could be; the service
would become more useful by becoming more hackable. What do you think?

-Eoin
On 6/2/07, Stefan Tilkov <stefan.tilkov@...> wrote:
> FWIW, Rails uses <collection>/<id>;edit to get the edit form, and
> links a Javascript function to pop up a confirmation dialog for
> DELETE (which is simulated by tunneling a hidden field through POST).
> The semicolon has been changed to a slash in the most recent versions
> (edge).
I recently went through the exercise of adding Rails-like collections
to Robaccia, which uses urls like:
GET /people list()
POST /people create()
GET /people/1 retrieve()
PUT /people/1 update()
DELETE /people/1 delete()
GET /people;create_form get_create_form()
GET /people/1;edit_form get_edit_form()
Note the use of 'create_form' instead of just 'create', similarly
for 'edit_form', as my way of emphasizing the "noun"-ness of the
resource.
Very long detailed writeup here:
http://bitworking.org/news/179/Gloves
-joe
--
Joe Gregorio http://bitworking.org
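Joe's mapping above can be sketched as a small dispatch table. This is an illustrative sketch only (Robaccia is Python, so Python fits); the regexes and handler names mirror the list above, not Robaccia's actual internals:

```python
import re

# Hypothetical dispatch table for the collection URL scheme above.
# Each entry: (HTTP method, compiled path pattern, handler name).
ROUTES = [
    ("GET",    re.compile(r"^/people$"),                "list"),
    ("POST",   re.compile(r"^/people$"),                "create"),
    ("GET",    re.compile(r"^/people/(\d+)$"),          "retrieve"),
    ("PUT",    re.compile(r"^/people/(\d+)$"),          "update"),
    ("DELETE", re.compile(r"^/people/(\d+)$"),          "delete"),
    ("GET",    re.compile(r"^/people;create_form$"),    "get_create_form"),
    ("GET",    re.compile(r"^/people/(\d+);edit_form$"), "get_edit_form"),
]

def dispatch(method, path):
    """Return (handler_name, captured_id_or_None) for a request line."""
    for m, pattern, handler in ROUTES:
        match = pattern.match(path)
        if m == method and match:
            ident = match.group(1) if match.groups() else None
            return handler, ident
    return None, None
```

For example, dispatch("GET", "/people/1;edit_form") resolves to the get_edit_form handler with id "1", while dispatch("POST", "/people") resolves to create.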
With regards to confirmation, I see at least 3 ways of implementing it. 1) Client Side: the client simply requires the user to confirm an action before the request is sent to the server. 2) Use Transactions: you could use a RESTful-style transaction as defined in "RESTful Web Services". Basically, create a transaction resource first and reference this transaction resource in each of the operations you perform; if you want to roll back, DELETE the transaction resource, or update the transaction resource to show it is committed. 3) Server Side: two DELETE requests; the first one changes the state of the resource to awaiting deletion confirmation, and a second DELETE request actually deletes the resource. 1 is the easiest and 3 is a little mad (I always like to have at least 3 ways of doing anything); 2 is probably the best way to implement this in the service itself. For simple confirmation I prefer option 1. -Eoin http://www.eoinprout.com
Eoinprout wrote: > With regards to confirmation, I see at least 3 ways of > implementing it. > > 1) Client Side : The Client simply requires the user to > confirm an action before the request is sent to the server. > > 2) Use Transactions : You could use a RESTful style > transaction as defined in "RESTful web services", Basically > create a Transaction resource first, reference this > transaction resource in each of the operations you perform, > If you want to rollback, DELETE the transaction resource, or > update the transaction resource to show it is committed. Can you give details of this, i.e. URLs and HTTP methods and any out-of-band information (i.e. not part of the URL)? BTW, I looked for "RESTful web services" locally and couldn't find it. I ordered it and am still waiting for it. > 3) Server side : Two Delete requests, The first one changes > the state of the resource to awaiting deletion confirmation, > A second Deletion request actually deletes the resource. > > 1 is the easiest and 3 is a little mad, I always like to have at least > 3 ways of doing anything. > 2 is probably the best way to implement this in the service itself. > > For simple confirmation I prefer option 1. > > -Eoin http://www.eoinprout.com -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
<mikeschinkel@...> wrote: > > Eoinprout wrote: > > 2) Use Transactions : You could use a RESTful style > > transaction as defined in "RESTful web services", Basically > > create a Transaction resource first, reference this > > transaction resource in each of the operations you perform, > > If you want to rollback, DELETE the transaction resource, or > > update the transaction resource to show it is committed. > > Can you give details of this, i.e. URLs and HTTP methods and any out of band > information (i.e. not part of the URL) > > BTW, I looked for "RESTful web services" locally and couldn't find. I > ordered it and am still waiting for it. > > -Mike Schinkel The example given in the book uses a banking case of transferring money from a chequing account ID 11 to a savings account ID 55 and goes like this. 1) Create a transaction resource for an account transfer. POST /transaction/account-transfer The response gives the URI for the transaction /transaction/account-transfer/11a5 2) reduce the amount in the chequing account 11 from 200 to 150 PUT /transaction/account-transfer/11a5/chequing/11 balance=150 3) increase the amount in the savings account 55 from 200 to 250 PUT /transaction/account-transfer/11a5/savings/55 balance=250 4) To rollback the transaction DELETE /transaction/account-transfer/11a5 5) To Commit the transaction PUT /transaction/account-transfer/11a5 committed=true The book uses the "checking" spelling, but this is confusing so I'm using the "chequing" spelling. I suggest another way could be 1) Create the transaction as before. 2) reduce the amount in the chequing account 11 from 200 to 150 PUT /chequing/11 transaction=/transaction/account-transfer/11a5 balance=150 3) increase the amount in the savings account 55 from 200 to 250 PUT /savings/55 transaction=/transaction/account-transfer/11a5 balance=250 4) Rollback transaction as before 5) Commit transaction as before Anyone want to point out the pros/cons of either method ?
- Eoin http://www.eoinprout.com/
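The book's variant can be sketched as an in-memory model to show the intended semantics: PUTs against the transaction URI only stage balances, and nothing touches the real accounts until commit. The class and method names here are hypothetical, not from the book:

```python
import uuid

class TransferTransactions:
    """In-memory sketch of the transaction-resource pattern:
    PUTs under the transaction URI stage new balances; the real
    accounts are untouched until the transaction is committed."""

    def __init__(self, accounts):
        self.accounts = accounts   # e.g. {"chequing/11": 200, "savings/55": 200}
        self.pending = {}          # txn id -> staged {account: balance}

    def create(self):
        """POST /transaction/account-transfer -> new transaction URI."""
        txn = uuid.uuid4().hex[:4]
        self.pending[txn] = {}
        return "/transaction/account-transfer/" + txn

    def put(self, txn, account, balance):
        """PUT /transaction/account-transfer/{txn}/{account} balance=..."""
        self.pending[txn][account] = balance

    def rollback(self, txn):
        """DELETE the transaction resource; staged changes vanish."""
        del self.pending[txn]

    def commit(self, txn):
        """PUT committed=true; staged balances become real."""
        self.accounts.update(self.pending.pop(txn))
```

The design choice this makes visible: rollback is just discarding the staged state, so no "compensation" of already-applied updates is ever needed.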
Bob Haugen wrote: > > > On 6/1/07, Alan Dean <alan.dean@... > <mailto:alan.dean%40gmail.com>> wrote: > > On 6/1/07, Bill de hOra <bill@... <mailto:bill%40dehora.net>> > wrote: > > > ... WWW is the wrong environment for ACID semantics. > > > > Agreed. > > > > Two-phase commit over the web is brittle & highly latent. This is why > > WS-Transaction (AT) is a bad idea. > > > > http://en.wikipedia.org/wiki/WS-Transaction > <http://en.wikipedia.org/wiki/WS-Transaction> > > > > "Long-running" (aka compensatory) transactions are the way to go. > > > > http://en.wikipedia.org/wiki/Long_running_transaction > <http://en.wikipedia.org/wiki/Long_running_transaction> > > 2PC != ACID. > > I agree about ACID and WS-AT, but one of my previous transaction > mentors claimed that some form of 2PC is unavoidable for coordination > among independent agents. He was smarter than me, so I'll tentatively > believe it. I've seen a paper that said Paxos was the same protocol as 2PC for some number of actors; I don't know if that means 2PC is generally unavoidable. The general theorem I know of is the 5 packet handshake. For 5PH to work some cases need to be excluded to avoid "byzantine generals" kinds of problems, and you might need to make some assumptions (such as an "eventual arrival" if the channel is asymmetric). I liked it because it applies directly to point to point messaging problems with an unreliable channel (which is most reliable messaging scenarios). > Compensation is extremely difficult in many (maybe most) cases. Can > you really undo all of the effects of a set of distributed actions? You don't need to. Design an exception management system that kicks it out to a person to decide how to resolve it. They can roll forward. > In REST, probably uses a separate transaction resource, or at least > that's true of all the RESTful transaction proposals I've seen on this > list or in the recent book. Yep. 
For an exchange of value it's useful if the state of the exchange has separate identity to the thing of value. If there was a design patterns book for the Web, this would be one of them. cheers Bill
Comment below. On 6/4/07, eoinprout <eoin@...> wrote: > > The example given in the book uses a banking case of transferring money > from a chequing account ID 11 to a savings account ID 55 and goes like > this. > > 1) Create a transaction resource for an account transfer. > POST /transaction/account-transfer > The response gives the URI for the transaction > /transaction/account-transfer/11a5 > > 2) reduce the amount in the chequing account 11 from 200 to 150 > PUT /transaction/account-transfer/11a5/chequing/11 > balance=150 > > 3) increase the amount in the savings account 55 from 200 to 250 > PUT /transaction/account-transfer/11a5/savings/55 > balance=250 > > 4) To rollback the transaction > DELETE /transaction/account-transfer/11a5 > > 5) To Commit the transaction > PUT /transaction/account-transfer/11a5 > committed=true > > The book uses the "checking" spelling, but this is confusing so I'm > using the "chequing" spelling. > > I suggest another way could be > 1) Create the transaction as before. > 2) reduce the amount in the chequing account 11 from 200 to 150 > PUT /chequing/11 > transaction=/transaction/account-transfer/11a5 > balance=150 > 3) increase the amount in the savings account 55 from 200 to 250 > PUT /savings/55 > transaction=/transaction/account-transfer/11a5 > balance=250 > 4) Rollback transaction as before > 5) Commit transaction as before > > Anyone want to point out the pros/cons of either method ? I have not read the book, and so am not sure I understand correctly. But the book method appears to support a provisional-final transaction pattern, where the PUTs before commit update provisional resources, and the real accounts are only updated upon commit. If I understand your proposal correctly, you are updating the real accounts before committing, and then reversing the updates upon rollback. If you update the real accounts before commit, what if somebody withdraws the money from account 55 before rollback (between steps 3 and 4)?
(One example of the many potential problems with "compensation" or "undo" rollbacks.) Maybe an interim withdrawal is unlikely in a quick account transfer scenario, but how about an order offer-acceptance transaction? If the real order is created before acceptance without any sign of provisionality, it will look to the fulfillment systems like an accepted order, and the fulfillment system may trigger pick, pack and ship.
Interesting bit of pushback on WADL from Dare Obasanjo: http://www.25hoursaday.com/weblog/CommentView.aspx?guid=f88dc5a6-0aff-44ca-ba42-38c651612092 He argues that XSD mapping to native code (and back) is a source of many problems, so if WADL encourages you to use XSD for a type system, then you've made a wrong turn already. I think I have to concur about the XSD side of things; it is something to strongly discourage. Given that the number of existing WADL documents to be compatible with is zero, it may be time to push for it to be relax-ng *only*. That is, if you expect WADL and the corresponding types to be handwritten. If instead they get generated from some service interface, then you are back in the same mess that we got from reverse-engineered WSDL files. Has anyone tried to write a WADL description for any modern service, like the Amazon EC2 REST API? -steve
All, Looking for best practices here. Or, more accurately, discovering conflicts between best practices. A best practice (and a constraint) of REST is to provide links in a returned representation to other useful states of the application (HATEOAS). Another best practice is that a server should accept messages in the same format as it sends them, such that a user can get a representation, modify it a little, and PUT it back. It seems to me that these two practices are in conflict. Imagine an expense report application where the following is returned: GET /users/placey/expenses/123 ... <ExpenseReport xml:base="http://example.com/expenses" xmlns:xlink="http://www.w3.org/1999/xlink"> <Submitter xlink:href="http://employees.example.com/placey">Peter Lacey</Submitter> <ChargeTo xlink:href="/departments/sales">Sales</ChargeTo> <ExpenseItem> <Date>2007-06-01</Date> <Description>Airfare</Description> <Amount>500.34</Amount> </ExpenseItem> </ExpenseReport> Even though the Submitter element is redundant with information in the URL, I want to provide that link to the "employee" system so that the client can get more detail on the employee if they want to. Similarly, I want to provide a link to information about the department being charged (maybe it has budget information). However, if I were to PUT this back to the server, I wouldn't want the client to specify the submitter, and, more importantly, I wouldn't want the client to be telling me what the URL to the submitter's employee record is. Ditto, I do not want the client to tell me the URL of the sales department. So, does one practice (connectedness) trump the other (representational symmetry)? Or should the server accept messages with unwanted URLs and fields and simply toss them away? Pete
On 5/30/07, Alan Dean <alan.dean@...> wrote: > Whilst I know that many, if not most, of the readers of the list > probably don't use Microsoft technologies - for those that do or are > interested here is a podcast on Channel9: > > "A conversation with Justin Smith about syndication and REST in the > Orcas release of Windows Communication Foundation" > > http://channel9.msdn.com/ShowPost.aspx?PostID=311356 Sounds a bit like Java's Rome, though there are "scenarios" where WS protocols "for security and intermediation" are layered on top of the RSS feed. "Takes the concepts that WCF uses (that are really tied to SOAP) and puts them out over HTTP." "It's very easy to take a parameter and a method and project that into the URI." Still, WCF will add GET support; content-type controls what you get back. There are some regrets at the end of the talk about this not being in Vista. Oh, the irony. The reason there's no REST support in WCF is precisely because of the effort invested in classic WCF instead, a lot of which isn't even mainstream interoperable WS-* over HTTP. Just think how different things would be if they had chosen a different path. The talk has a bit of a firefighting feel to it, an unexpected change of direction. However, if there is one thing that MS has excelled at in the past, it is changing direction successfully. -steve
I can't disagree that XSD code generation causes problems, but WADL is trying to fix and stabilize more than just the data formats. WADL (and any IDL in the Web space) is trying to publish the exact or templated URIs as well. The simplest example I can think of is a checkout form. Today it's one single form with all the values; tomorrow it's a three-step form. From one URI to three. Any clients generated by an IDL against the first single version of the form will promptly break. The IDL strategy for preventing upgrade failures is to now provide two services, one that conforms to the published IDL and another that is evolved. Maintaining two entry points to a service is expensive, and this example change shouldn't cause such an expensive undertaking. Instead of even trying to "fix" the URI, we should use a data-driven approach that enables machine agents to a) follow hypermedia state changes dynamically, and b) choose between different links and forms on a page. Instead of an IDL that defines fixed URIs for, say, logout, addItem, and checkout, the agreement between client and server would be a shared data type(s). The machine agent would pick the <form> that had class="checkout" instead of the one with class="logout", fill in any values it recognized, and submit it. The response page would either have a success data value, a failure data value, or another <form class="checkout"> that should be followed. Protocol as data, not interface. John Heintz http://johnheintz.blogspot.com/2007/05/does-rest-need-dl.html On 6/4/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote: > Interesting bit of pushback on WADL from Dare Obasanjo: > http://www.25hoursaday.com/weblog/CommentView.aspx?guid=f88dc5a6-0aff-44ca-ba42-38c651612092 > > He argues that XSD mapping to native code (and back) is a source of > many problems, so if WADL encourages you to use XSD for a type system, > then you've made a wrong turn already.
> > I think I have to concur about the XSD side of things, it is something > to strongly discourage. Given the number of existing WADL documents to > be compatible with is zero, it may be time to push for it to be > relax-ng *only*. That is, if you expect WADL and the corresponding > types to be handwritten. If instead they get generated from some > service interface, then you are back in the same mess that we got from > reverse-engineered WSDL files. > > Has anyone tried to write a WADL description for any modern service, > like Amazon EC2 REST API? > > -steve -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
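John's "agent picks the form by class" idea can be sketched in a few lines of Python with the standard-library HTML parser. The page markup and class names below are hypothetical, chosen only to match his checkout/logout example:

```python
from html.parser import HTMLParser

class FormFinder(HTMLParser):
    """Collect (class, action, method) for each <form> in a page, so a
    machine agent can choose the form whose class it recognizes rather
    than relying on a URI fixed in an interface description."""

    def __init__(self):
        super().__init__()
        self.forms = {}

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            a = dict(attrs)
            self.forms[a.get("class")] = (a.get("action"), a.get("method", "get"))

def pick_form(page, wanted_class):
    """Return (action, method) for the form marked with wanted_class."""
    finder = FormFinder()
    finder.feed(page)
    return finder.forms.get(wanted_class)
```

If the checkout flow later changes from one URI to three, the agent keeps working as long as each response still carries a form marked class="checkout"; the URI in action is data, not contract.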
Peter Lacey <placey@...> writes: > GET /users/placey/expenses/123 > ... > <ExpenseReport xml:base="http://example.com/expenses" > xmlns:xlink="http://www.w3.org/1999/xlink"> > <Submitter xlink:href=“http://employees.example.com/placey”>Peter > Lacey</Submitter> > <ChargeTo xlink:href=“/departments/sales”>Sales</ChargeTo> > <ExpenseItem> > <Date>2007-06-01</Date> > <Description>Airfare</Description> > <Amount>500.34</Amount> > </ExpenseItem> > </ExpenseReport> > > Even though the Submitter element is redundant with information in the > URL, I want to provide that link to the "employee" system so that the > client can get more detail on the employee if they want to. Similarly, I > want to provide a link to information about the department being charged > (maybe it has budget information). However, if I were to PUT this back > to the server, I wouldn't want the client to specify the submitter, and, > more importantly, I wouldn't want the client to be telling me what the > URL to the submitter's employee record is. Ditto, I do not want the > client to tell me the URL of the sales department. Why wouldn't you want those two things? Presumably you would validate them based on some authentication... but if they're in the document model they seem quite reasonable to me. Something to consider here is that someone might be authorized to submit a form for someone else. You might want to look at XForms because it has quite a lot to say about this scenario. > So, does one practice (connectedness) trump the other (representational > symmetry)? Or should the server accept messages with unwanted URLs and > fields and simply toss them away? It could do. Or it might fail with some kind of validation error. Or it might accept the details and try and do something with them as I suggest above. -- Nic Ferrier http://www.tapsellferrier.co.uk
On Jun 4, 2007, at 10:52 PM, Peter Lacey wrote: > So, does one practice (connectedness) trump the other > (representational > symmetry)? Or should the server accept messages with unwanted URLs and > fields and simply toss them away? How about using different content types for the representations with and without the URLs that you want to have created by the server? In any case, I believe the PUT restriction is in fact not that strict. At least the HTTP spec doesn't define that what has been PUT must be exactly what's retrieved by a subsequent GET (unless I'm missing something). Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
--- In rest-discuss@yahoogroups.com, "Bob Haugen" <bob.haugen@...> wrote:
>
> Comment below.
>
> On 6/4/07, eoinprout <eoin@...> wrote:
> > I suggest another way could be
> > 1) Create the transaction as before.
> > 2) reduce the amount in the chequing account 11 from 200 to 150
> > PUT /chequing/11
> > transaction=/transaction/account-transfer/11a5
> > balance=150
> > 3) increase the amount in the savings account 55 from 200 to 250
> > PUT /chequing/11
> > transaction=/transaction/account-transfer/11a5
> > balance=250
> > 4) Rollback transaction as before
> > 5) Commit transaction as before
> >
> > Anyone want to point out the pros/cons of either method ?
>
> I have not read the book, and so am not sure I understand correctly.
>
> But the book method appears to support a provisional-final transaction
> pattern, where the PUTs before commit update provisional resources,
> and the real accounts are only updated upon commit. If I understand
> your proposal correctly, you are updating the real accounts before
> committing, and then reversing the updates upon rollback.
>
> If you update the real accounts before commit, what if somebody
> withdraws the money from account 55 before rollback (between steps 3
> and 4)?
>
> (One example of the many potential problems with "compensation" or
> "undo" rollbacks.)
>
> Maybe an interim withdrawal is unlikely in a quick account transfer
> scenario, but how about an order offer-acceptance transaction? If the
> real order is created before acceptance without any sign of
> provisionality, it will look to the fulfillment systems like an
> accepted order, and the fulfillment system may trigger pick, pack and
> ship.
Based on what I wrote, your assumption is correct, but it isn't what I
was thinking. I'm guilty of making an assumption, sorry.
The actual resources would not be updated until the transaction is
committed. The reason I suggested an alternative to the book's example
is that the structure of the URI implied that only certain operations
can be transactional, i.e. ones defined as transactional by the service
developer, like account transfers.
I prefer to think that every operation can be part of a transaction.
On second thought, using the URI may be better with some slight
modification.
1) Create transaction
POST /transactions/
The Response returns the URI of the transaction
/transactions/{transactionID}
2) Perform any operations to be included in the transaction
OPERATION /transactions/{transactionID}/{Resource URL}
Commit and roll back would be as before.
This allows any operation on any resource to be part of the transaction.
Eoin - http://www.eoinprout.com/
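Eoin's generalized scheme, where any method applied under a transaction URI is staged rather than executed, might look like this in-memory sketch (class and method names are illustrative, not from the thread):

```python
class TransactionScope:
    """Sketch of OPERATION /transactions/{transactionID}/{resource URL}:
    operations under a transaction URI are recorded, then replayed
    against the real resources on commit."""

    def __init__(self):
        self.counter = 0
        self.queued = {}   # txn id -> list of (method, resource path)

    def create(self):
        """POST /transactions/ -> /transactions/{transactionID}"""
        self.counter += 1
        txn = str(self.counter)
        self.queued[txn] = []
        return "/transactions/" + txn

    def request(self, method, path):
        """Intercept any method on /transactions/{txn}/{resource URL}."""
        parts = path.split("/", 3)   # ['', 'transactions', txn, resource]
        if len(parts) == 4 and parts[1] == "transactions" and parts[2] in self.queued:
            self.queued[parts[2]].append((method, "/" + parts[3]))
            return "202 Accepted"    # staged, not yet applied
        return "400 Bad Request"

    def commit(self, txn, apply):
        """PUT /transactions/{txn} committed=true -> replay operations."""
        for method, resource in self.queued.pop(txn):
            apply(method, resource)
```

The point of the sketch: because the resource URL is embedded after the transaction segment, any operation on any resource can be made transactional without the service developer declaring it so in advance.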
On 6/4/07, Peter Lacey <placey@...> wrote: > So, does one practice (connectedness) trump the other (representational > symmetry)? Or should the server accept messages with unwanted URLs and > fields and simply toss them away? It's not "tossing them away" if it's a request to set the field to the value it would be anyway. That's a successful PUT. Mark.
The theoretical model of REST does not say that a resource is
synonymous with a single block of data held inside the server, a
resource can be much more than that. Resources are not records in a
database table, they are a view of your system /from the outside/.
It's okay to have a resource for a 'public view', and a 'private view'
and a 'private view for editing' and 'public view for editing' and a
'private view for editing by account manager', etc. This is what
really happens on the Web and this is what ReST describes. The
hypermedia as the engine of application state (heapps - since i can't
pronounce hateoas) even describes how to learn at runtime what those
other interesting resources might be. Humans only need blue underlined
text to pick out interesting links, but machines may need more hints.
> Thanks. So the "editform" et. al. is a noun. Hmm. I guess if "edit" were
> used it could also be used as a noun.
>
> I wonder if this is one of the places where the theoretical model of REST
> breaks down for use in the real world? Pure REST would say:
>
> 1.) GET {resource} as representation
> 2.) Modify representation
> 3.) PUT modified representation to {resource}
>
> This implies that all of step #2 would be handled by the client whereas on
> the web when using web browsers and html forms step #2 is actually multiple
> steps facilitated by the server. This makes me think that the concept of
> "edit" as noun or "editform" might simply be shoehorning reality into the
> REST theoretical model.
>
> FYI, I present this not to discredit REST; no not at all. Instead, I am
> trying to understand the limitations of the theoretical model, if such
> exists, so as to be pragmatic when applying solutions and so as not to be a
> cargo-cultist.
>
> --
> -Mike Schinkel
> http://www.mikeschinkel.com/blogs/
> http://www.welldesignedurls.org
> http://atlanta-web.org - http://t.oolicio.us
Mike Dierken wrote: > The theoretical model of REST does not say that a resource is > synonymous with a single block of data held inside the > server, a resource can be much more than that. Resources are > not records in a database table, they are a view of your > system /from the outside/. You wouldn't otherwise know, but I have studied the concept of resources ad nauseam in my quest to really understand URLs, URI opacity, REST, et. al. I especially like how the debates on what defines a resource turned up no definition better than one equivalent to a certain US Supreme Court justice's definition of pornography: "I, I, I can't define it, but I KNOW IT WHEN I SEE IT!" :-) But I think you missed my point, and the point was that the combined "edit" & "edit_form" seem to be part of a process flow that REST does not define. REST seems to define the retrieval of the data and the update of the data, but not the interim process. Sure, we can use REST to handle that interim process, but it seems contrived, at least to me. Again, I'm not saying it invalidates REST; instead I'm saying that it would mean that architecting a proper REST system requires more thought to implement than the high-level thesis has defined. > It's okay to have a resource for a 'public view', and a 'private view' > and a 'private view for editing' and 'public view for > editing' and a 'private view for editing by account manager', > etc. This is what really happens on the Web and this is what > ReST describes. This seems to me to be apples and oranges compared to what I was saying, i.e. part of a process flow vs. security-enabled views. That said, can you explain how one would communicate to the same URL on the server that one view should be public and another private? Seems to me it should be a different URL. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Hi Mike
I've modified your example by adding a transaction around the delete
operation. This takes care of the confirmation capability you were
talking about in your previous posts. Using transactions for a single
operation just to ask the user to confirm it is overkill, but in this
case it's just to illustrate a point.
GET /items                  -> Retrieves a list of items
GET /items/add              -> Retrieves a form for entering a new item,
                               On Submit: POST /items
POST /items                 -> Inserts a new item, 302 redirects to
                               GET /items/{id}
GET /items/{id}/edit        -> Retrieves a form for editing an existing
                               item, On Submit: PUT /items/{id}
PUT /items/{id}             -> Updates an existing item, 302 redirects
                               to GET /items/{id}
GET /items/{id}/delete      -> Retrieves a form to confirm delete of an
                               existing item, On Submit: DELETE /items/{id}
POST /transactions          -> Creates a new transaction
                               /transactions/{TransID}
DELETE /transactions/{TransID}/items/{id} -> Deletes an existing item
PUT /transactions/{TransID} committed=true -> Commits the transaction,
                               which now deletes the item resource;
                               302 redirects to GET /items/
GET /items/{id}             -> Retrieves an existing item
Eoin - http://www.eoinprout.com/
Stefan Tilkov wrote: > In any case, I believe the PUT restriction is in fact not that > strict. At least the HTTP spec doesn't define that what has been PUT > must be exactly what's retrieved by a subsequent GET (unless I'm > missing something). Quite the opposite. If the spec defined that, it would be able to allow for write-through caching. The spec says it can't allow for write-through caching because it doesn't know that what has been PUT will be retrieved by a subsequent GET (also, there are a whole bunch of cases where even systems that did have that as a possibility would have other possibilities - it's not like resources can only have one representation).
I am interested in hearing how people use the query string to constrain a GET. For instance if I want to get a list of cars registered between 03/2000 and 09/2006 /cars?registeredDate$from=03-2000&registeredDate$to=09-2006 /cars?registeredDate!from=03-2000&registeredDate!to=09-2006 /cars?registeredDate=[03-2000,09-2006] How are people encoding expressions such as >, >=, < and <= within a GET request? cheers </jima>
There's no need for a standard URL format and really no need to directly encode a terse query language in URLs. You can if you want of course, but really most anything will do, like /cars?registeredBefore=2006-09&registeredAfter=2000-03 The handy part is using a standard form that generates the URL - HTML has something that works. When more clients support WADL or some other format, then the possible client-generated URLs will become richer. In many applications, you can avoid teaching the client how to generate the URLs by providing links and just teach the client how to find the links within a document. > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of jalateras > Sent: Monday, June 04, 2007 9:22 PM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] Using Query String in GET method > > I am interested in hearing how people use the query string to > constrain a GET. > > For instance if i want to get a list of cars registered > between 03/2000 and 09/2006 > > /cars?registeredDate$from=03-2000&registeredDate$to=09-2006 > /cars?registeredDate!from=03-2000&registeredDate!to=09-2006 > /cars?registeredDate=[03-2000,06-2006] > > How are people encoding expressions such as >, >=, < and <= > within a GET request. > > > cheers > </jima>
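Either encoding is straightforward to parse server-side. A sketch for the plain-parameter style, with illustrative (non-standard) parameter names and month-granularity YYYY-MM values:

```python
from urllib.parse import urlsplit, parse_qs
from datetime import date

def registration_range(url):
    """Parse ?registeredAfter=YYYY-MM&registeredBefore=YYYY-MM into a
    (after, before) pair of dates. Parameter names are illustrative;
    either bound may be absent, in which case None is returned for it."""
    qs = parse_qs(urlsplit(url).query)

    def month(name):
        value = qs.get(name, [None])[0]
        if value is None:
            return None
        year, mon = value.split("-")
        return date(int(year), int(mon), 1)

    return month("registeredAfter"), month("registeredBefore")
```

The server then compares each car's registration date against whichever bounds are present; no >=/<= operators need to appear in the URL at all.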
On 6/4/07, John D. Heintz <jheintz@...> wrote: > > Protocol as data, not interface. I totally agree - we don't need WADL (or some other DL). In fact I think it is positively a bad thing. > > John Heintz > http://johnheintz.blogspot.com/2007/05/does-rest-need-dl.html Regards, Alan Dean http://thoughtpad.net/alan-dean
> >http://en.ericjbowman.com/date;transform=1?iso=2007-05-25 is an RPC >way of doing things > How so? RPC communication is stateful, in my case, "each request from client to server... contain(s) all of the information necessary to understand the request, and (does not) take advantage of any stored context on the server" which is a stateless, RESTful interaction. > >wouldn't something like >http://en.ericjbowman.com/date/2007-05-25.iso/LongFormat.html >be more RESTful. > Not really, URI design is only orthogonal to REST. My way is not the only way. > >http://en.ericjbowman.com/date/2007-05-25.iso/ >would return links to each of the different formats which can be >returned by your service. >In this way the service is self describing. >At the moment someone has to guess what the possible parameters could >be, The service would become more useful by becoming more hackable. > Well, that's another way to do it. But my way is still self-describing because there is a service document which clearly lays out the existence and usage of parameters for each language the service is available in, no guesswork involved. I'm a bit confused on where self-describing services enters into REST, though. More usable to whom? If I want to write an XSLT stylesheet which takes the primary output and transforms it into RFC 1123 format served as text/plain then I just do this after it's written: curl -iT "en.2.xsl" http://ericjbowman.com/date/en.2.xsl Which does a PUT. I get a 201 Created if it's new, if I've overwritten an existing file I get a 204 No Content response. 
Now, I can GET this: http://en.ericjbowman.com/date;transform=2?iso=2007-05-25 That seems pretty usable to me, as the maintainer of the service, since no existing consumers of the service need to be rewritten to accommodate it, no new name must be devised which might not apply in the future, and all I have to do (if I even want to) is update my service documents to reflect that the English version now has this capability (which doesn't make sense for any other language). Anyway, in terms of REST constraints, it's the messages that are to be self-descriptive, not the service itself. Not hierarchical, but still hackable: http://en.ericjbowman.com/date?iso=2007-05-25 Also, not RPC because it's perfectly cacheable. Yes, I can set an Etag, but we are waiting for Caucho to add MD5 capability to Resin (in the works) so we can set Etag=MD5. If this were RPC instead of GET the output would not be cacheable at all. Instead, I've optimized GET, which is very RESTful. -Eric
Ross M Karchner wrote: > On 5/28/07, Bill de hOra <bill@...> wrote: >> I think one reason is that without conneg, you end up providing a URI >> for each supported format, and URI proliferation is hardly a good thing. >> A few systems do that now; the Zimbra API would be one, moinmoin is >> another. Here's a simple example: >> >> xhtml: >> <http://www.citizensinformation.ie/categories/money-and-tax/tax/duties-and-vat/stamp-duty-on-financial-cards> >> >> atom: >> <http://www.citizensinformation.ie/categories/money-and-tax/tax/duties-and-vat/stamp-duty-on-financial-cards/entry.xml> >> > > Is there any precedent or value in combining the techniques? I guess > that would mean issuing a redirect in response to the negotiation > headers. > > Using the above example, perhaps if a client requests the xhtml > document but prefers or only accepts atom, the server responds with a > 303 and a location header pointing to the atom document. Yes, but you still have conneg when you do this.
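Ross's redirect idea can be sketched as a tiny negotiation step that still inspects Accept (so, as Bill notes, conneg remains), using the per-format URI pattern from the citizensinformation.ie example above. The variant table and media types are illustrative:

```python
# Combining conneg with format-specific URIs: a GET on the generic
# URI either serves its own (xhtml) representation or 303-redirects
# to the variant's dedicated URI (e.g. .../entry.xml for atom).
VARIANTS = {
    "application/xhtml+xml": "",           # generic URI serves xhtml itself
    "application/atom+xml": "/entry.xml",  # atom lives at its own URI
}

def negotiate(generic_uri, accept_header):
    """Return (status, location) for a GET on the generic URI."""
    for media_type in (t.split(";")[0].strip() for t in accept_header.split(",")):
        suffix = VARIANTS.get(media_type)
        if suffix == "":
            return 200, generic_uri           # serve in place
        if suffix:
            return 303, generic_uri + suffix  # See Other: variant URI
    return 406, None                          # Not Acceptable
```

The 303 gives every format a bookmarkable, cacheable URI of its own, while the generic URI stays the single entry point that clients negotiate against.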
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> > http://en.ericjbowman.com/date;transform=1?iso=2007-05-25 is an RPC
> > way of doing things
>
> How so? RPC communication is stateful; in my case, "each request from
> client to server... contain(s) all of the information necessary to
> understand the request, and (does not) take advantage of any stored
> context on the server" -- which is a stateless, RESTful interaction.

It looks like RPC to me because it looks like you're making a function
call rather than requesting a resource.

RPC does not have to be stateful.

> > wouldn't something like
> > http://en.ericjbowman.com/date/2007-05-25.iso/LongFormat.html
> > be more RESTful.
>
> Not really; URI design is orthogonal to REST. My way is not the only
> way.
>
> > http://en.ericjbowman.com/date/2007-05-25.iso/
> > would return links to each of the different formats which can be
> > returned by your service.
> > In this way the service is self-describing.
> > At the moment someone has to guess what the possible parameters
> > could be. The service would become more useful by becoming more
> > hackable.
>
> Well, that's another way to do it. But my way is still
> self-describing, because there is a service document which clearly
> lays out the existence and usage of parameters for each language the
> service is available in -- no guesswork involved. I'm a bit confused
> about where self-describing services enter into REST, though.

One way REST is self-describing is that a user can traverse links from
one resource to another related resource, rather than knowing the rules
for how to construct the URLs. And you have things like OPTIONS, which
is supposed to say what operations are allowed on a particular resource.

> More usable to whom? If I want to write an XSLT stylesheet which
> takes the primary output and transforms it into RFC 1123 format served
> as text/plain, then I just do this after it's written:
>
> curl -iT "en.2.xsl" http://ericjbowman.com/date/en.2.xsl
>
> Which does a PUT. I get a 201 Created if it's new; if I've overwritten
> an existing file, I get a 204 No Content response. Now, I can GET
> this:
>
> http://en.ericjbowman.com/date;transform=2?iso=2007-05-25
>
> [...]
>
> Also, not RPC, because it's perfectly cacheable. [...] Instead, I've
> optimized GET, which is very RESTful.
>
> -Eric

Some systems won't/can't cache GETs which contain parameters. But this
is more a flaw in those systems than any flaw in your method.

But as you say, there is no one way. This is only my opinion.

Eoin
Nic James Ferrier wrote:
> Peter Lacey <placey@...> writes:
>
>> However, if I were to PUT this back to the server, I wouldn't want
>> the client to specify the submitter, and, more importantly, I
>> wouldn't want the client to be telling me what the URL to the
>> submitter's employee record is. Ditto, I do not want the client to
>> tell me the URL of the sales department.
>
> Why wouldn't you want those two things?

It feels wrong to have the client providing information to the server
that the server is authoritative on. Similarly, it seems wrong to have
the client provide data elements that the server doesn't care about.

> Presumably you would validate them based on some authentication... but
> if they're in the document model they seem quite reasonable to me.
>
> Something to consider here is that someone might be authorized to
> submit a form for someone else.

For this discussion I'm eliminating the nuances of authN/authZ.

Pete
> > So, does one practice (connectedness) trump the other
> > (representational symmetry)? Or should the server accept messages
> > with unwanted URLs and fields and simply toss them away?
>
> How about using different content types for the representations with
> and without the URLs that you want to have created by the server?

Ermm, maybe. Seems awkward, though.

> In any case, I believe the PUT restriction is in fact not that
> strict. At least the HTTP spec doesn't define that what has been PUT
> must be exactly what's retrieved by a subsequent GET (unless I'm
> missing something).

Absolutely true. But it's a good best practice, especially for generic
clients which can retrieve the contents of any URL, present it for
editing, and allow for it to be posted back. I'd rather not give it up
unless I have to.
Mark Baker wrote:
> On 6/4/07, Peter Lacey <placey@...> wrote:
>> So, does one practice (connectedness) trump the other
>> (representational symmetry)? Or should the server accept messages
>> with unwanted URLs and fields and simply toss them away?
>
> It's not "tossing them away" if it's a request to set the field to the
> value it would be anyway. That's a successful PUT.
>
> Mark.

Now this answer I like. Frequently it seems that REST is just a matter
of changing your perspective.

Pete
Peter Lacey wrote:
> It feels wrong to have the client providing information to the server
> that the server is authoritative on. Similarly, it seems wrong to have
> the client provide data elements that the server doesn't care about.

If a client is doing a PUT, it is asserting something about the
resource. The server may be authoritative on that resource, but the
server can authoritatively say "this client knows what it is talking
about".

Now consider the concept of a representation: if a representation has
links to other resources, that representation is asserting some sort of
relationship between the resource it represents and another resource
(whether semantically significant or a matter of a representation's
implementation). A client that knows what it is talking about is in as
much of a position to make such an assertion as anything else.

Servers can also say, "this client knows what it is talking about, but
I'd phrase this bit differently, and I know better than it does on that
bit".
eoinprout wrote:
> --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>>> http://en.ericjbowman.com/date;transform=1?iso=2007-05-25 is an RPC
>>> way of doing things
>>>
>> How so? RPC communication is stateful; in my case, "each request from
>> client to server... contain(s) all of the information necessary to
>> understand the request, and (does not) take advantage of any stored
>> context on the server" -- which is a stateless, RESTful interaction.
>
> It looks like RPC to me because it looks like you're making a function
> call rather than requesting a resource.
>
> RPC does not have to be stateful.

Treat the URI as opaque. Verbs in a URI might be a bad smell, a very bad
smell, but there's nothing RESTful or unRESTful about them. What matters
is which verb counts: the method verb (in the case of HTTP: GET, PUT,
&c.) or the one in the identifier. A URI is just an identifier for a
resource, and as long as clients are able to treat it as such, there's
no problem with having anything in it.

That aside, that's one ugly URI!

>> Well, that's another way to do it. But my way is still
>> self-describing, because there is a service document which clearly
>> lays out the existence and usage of parameters for each language the
>> service is available in -- no guesswork involved. I'm a bit confused
>> about where self-describing services enter into REST, though.

As an addendum to what Eoin wrote, REST is self-describing because
responses and requests include information in them (in the case of HTTP,
Content-Type and other headers) which tells the client and server how to
parse the message.

One of the reasons why service documents don't really gel with REST is
that clients aren't meant to care what the identifiers look like. All
they're meant to care about is that there are URIs present that can be
dereferenced.
Where this isn't the case, you've got two choices: hardwire the clients,
or get the server to tell the client how to build the URIs by including
a template _in the response_. This keeps the client and server decoupled
from one another -- a service document is a form of strong coupling --
and allows the server and client to vary without breaking.

> One way REST is self-describing is that a user can traverse links from
> one resource to another related resource, rather than knowing the
> rules for how to construct the URLs.
> And you have things like OPTIONS, which is supposed to say what
> operations are allowed on a particular resource.

Pity OPTIONS is so underspecified, though.

>> Anyway, in terms of REST constraints, it's the messages that are to
>> be self-descriptive, not the service itself.

Once the messages are self-descriptive, so too is the service itself.

K.
--
Blacknight Internet Solutions Ltd. <http://blacknight.ie/>
Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow,
Ireland
Company No.: 370845
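[Editorial note: "a template _in the response_" can be sketched as
below. This is a minimal, hypothetical expander for {name} placeholders;
the URI Templates drafts circulating at the time (later RFC 6570) define
much richer behavior.]

```python
import re

def expand(template, values):
    """Expand a URI template received in a response body.

    Replaces each {name} placeholder with the client's value,
    so the client never constructs URIs from out-of-band rules.
    """
    return re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], template)

# A server might advertise this template in a representation:
template = "http://en.ericjbowman.com/date;transform={n}?iso={date}"
```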
Steve Loughran wrote:
>
> Interesting bit of pushback on WADL from Dare Obasanjo:
> http://www.25hoursaday.com/weblog/CommentView.aspx?guid=f88dc5a6-0aff-44ca-ba42-38c651612092
>
> He argues that XSD mapping to native code (and back) is a source of
> many problems, so if WADL encourages you to use XSD for a type system,
> then you've made a wrong turn already.

I'm thinking if you've decided on mapping your data to XML in the first
place, you've made a wrong turn. XML is a poor serialization format,
because it has little direct mapping to 'data'. It's great for
documents, but as programmers, we deal with data, we design data (or
try to). Expecting programmers to design well-thought-out documents is
too much to ask, IMHO.

Consider JSON.

--
Patrick Mueller
http://muellerware.org
Patrick Mueller wrote:
>
> Consider JSON.
Someone just asked me, w/r/t the referenced post:
Ok, I'll bite. How does JSON make life better?
JSON is a self-descriptive data representation. It supports all the
basic, atomic building blocks of data: booleans, numbers, strings. And
provides composite structures of arrays and maps. Maps can also be
viewed as 'objects', where the keys of the map are the properties of
the object; in fact, that's exactly how JavaScript treats them.
XML is great for documents. Which are hard to map into data; you have
to actually think about how to do it right. Attributes or sub-elements?
Explicit list structures, or just multiple elements to represent a list?
JSON is great for data. Which is easy to map into data.
--
Patrick Mueller
http://muellerware.org
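[Editorial note: Patrick's point can be checked with any JSON parser;
here is a sketch using Python's json module (at the time, the simplejson
library). The sample document is invented for illustration. The parsed
result already carries the types: booleans, numbers, strings, arrays,
and maps come back as native values with no schema step.]

```python
import json

# Types survive parsing; no conversion or schema needed.
doc = json.loads('{"name": "Bob", "age": 7, "admin": false, "tags": ["dog"]}')
```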
Patrick Mueller <pmuellr@...> wrote:
>
> Patrick Mueller wrote:
> >
> > Consider JSON.
>
> Someone just asked me, w/r/t to the referenced post:
>
> Ok, I'll bite. How does JSON make life better?
>
> JSON is a self-descriptive data representation. It supports all the
> basic, atomic building blocks of data: booleans, numbers, strings. And
> provides composite structures of arrays and maps. Maps can also be
> viewed as 'objects', where they keys of the map are the properties of
> the object; in fact, that's exactly how JavaScript treats them.
>
> XML is great for documents. Which are hard to map into data; you have
> to actually think about how to do it right. Attributes or
sub-elements?
> Explicit list structures, or just multiple elements to represent a
list?
>
> JSON is great for data. Which is easy to map into data.
>
Actually I meant to send the question to the list (darn web
interface), so thanks for forwarding the response.
I'm sorry, but my mind is too stubborn to see the difference between
the two. Are you saying that JSON is better b/c it encodes the type
of a value (e.g. bool, number, string) in its representation? Isn't
there otherwise a 1-to-1 mapping b/w JSON and XML?
One thing that always bugged me about objects was that I couldn't
encapsulate sub-properties without resorting to other objects. For
instance, doing something like:
doc.settings.alignment = "left";
Here, I would need to have a 'doc' object and a 'settings' object that
contains a reference to "left". This bugs me because the 'settings'
kind of object might not be useful anywhere else (I realize that in
this example that might not be the case, but let's stick with it for
argument sake).
In XML, I don't have this limitation. Instead, I can say
doc[settings/alignment] = "left";
It's all part of one object (or document; frankly, I don't see the
difference between the two anymore, except that the latter seems to be
an evolution of the former).
Could you give a concrete example of where JSON makes life better?
- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
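[Editorial note: Steve's objection holds for statically declared
classes, but dynamic maps sidestep it; in JSON terms, the 'settings'
object needs no class anywhere. A Python sketch of his example:]

```python
# Nest a sub-property without declaring a 'Settings' type;
# the intermediate object springs into existence on demand.
doc = {}
doc.setdefault("settings", {})["alignment"] = "left"
```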
Patrick Mueller wrote:
> XML is great for documents. Which are hard to map into data; you have
> to actually think about how to do it right. Attributes or
> sub-elements? Explicit list structures, or just multiple elements to
> represent a list?
>
> JSON is great for data. Which is easy to map into data.

XML is good for data you will map to documents, though (if you are a
whiz at whatever language is doing the work and lousy at XSLT, you will
obviously have different thresholds than if you can write enough code to
pump something into a transform but are a whiz at XSLT). JSON isn't too
hot for languages other than javascript if you have XML parsers to hand
but no JSON parser.

I tend to find that if I'm writing javascript then I'm also producing
XML documents or fragments out of what I'm receiving, so I tend to
favour XML. However, JSON does indeed come into its own for anything
where just having a javascript object is what you want.
Jon Hanna wrote:
> XML is good for data you will map to documents, though (if you are a
> whiz at whatever language is doing the work and lousy at XSLT, you
> will obviously have different thresholds than if you can write enough
> code to pump something into a transform but are a whiz at XSLT). JSON
> isn't too hot for languages other than javascript if you have XML
> parsers to hand but no JSON parser.
>
> I tend to find that if I'm writing javascript then I'm also producing
> XML documents or fragments out of what I'm receiving, so I tend to
> favour XML. However, JSON does indeed come into its own for anything
> where just having a javascript object is what you want.

I'm not a fan of JSON because it's JavaScript; I'm a fan of JSON because
it's a data structure. There are plenty of JSON-munching libraries
available for various languages; scroll to the bottom of:

http://json.org/

--
Patrick Mueller
http://muellerware.org
> But I think you missed my point, and the point was that the combined
> "edit" & "edit_form" seem to be part of process flow that REST does
> not define. REST seems to define the retrieval of the data and the
> update of the data, but not the interim process.

The 'hypermedia as the engine of application state' is the part of REST
that describes this.

The 'uniform interface' aspect talks about GET/PUT/DELETE of a single
resource, but it's the collection of resources that make up the full
application, and REST describes how that collection of resources is
used.

> > It's okay to have a resource for a 'public view', and a 'private
> > view' and a 'private view for editing' and 'public view for
> > editing' and a 'private view for editing by account manager', etc.
> > This is what really happens on the Web and this is what ReST
> > describes.
>
> This seems to me to be apples and oranges compared to what I was
> saying, i.e. part of a process flow vs security-enabled views.

A process flow that involves multiple pages (each of which is a distinct
resource) can look very similar to 'views' of a single block of data. Go
to any site with user profiles and you'll see a page for the 'public
profile' and a page for the 'edit my profile' - two resources, same
database record(s).

> That said, can you explain how one would communicate to the same URL
> on the server that one view should be public and another private?
> Seems to me it should be a different URL.

They would be different URLs, I wasn't suggesting otherwise.
Jon Hanna <jon@...> writes:
> Patrick Mueller wrote:
>> XML is great for documents. Which are hard to map into data; you have
>> to actually think about how to do it right. Attributes or sub-elements?
>> Explicit list structures, or just multiple elements to represent a list?
>>
>> JSON is great for data. Which is easy to map into data.
>
> XML is good for data you will map to documents though (if you are a whiz
> at whatever language is doing the work and lousy at XSLT you will
> obviously have different thresholds than if you can write enough code to
> pump something into a transform but are a whiz at XSLT). JSON isn't too
> hot for languages other than javascript if you have XML parsers to hand
> but no JSON parser.
>
> I tend to find that if I'm writing javascript then I'm also producing
> XML documents or fragments out of what I'm receiving, so I tend to
> favour XML. However, JSON does indeed come into its own for anything
> where just having a javascript object is what you want.
I have a different perspective on this.
I like to use JSON when I'm dealing with a web object inside a
programming language - pretty much any programming language but I
mostly only use languages that are JSON friendly.
It's hard to write XML in Java, Python or Ruby. You basically have to
write it as strings and lose lots of editing goodness and keep making
syntax mistakes.
But editing JSON in those languages is usually a matter of using the
native language features which all have good syntax support.
For example, here's a bit of Python code I've written just recently:
return {"abbr":
          {"@class": "user",
           "@title": strip_openid_url(openid_profile.openid),
           "div":
             [{"span":
                 {"@class": "nickname",
                  "span": openid_profile.nick_name}},
              {"ul":
                 {"@class": "mugshots",
                  "div":
                    [{"li":
                        [{"img":
                            {"@class": "mugshot",
                             "@alt": mugshot.name,
                             "@src": file_field_get_url(mugshot.shot)}},
                         {"img":
                            {"@class": "avatar",
                             "@alt": mugshot.name,
                             "@src": "/sitemedia/%s" % (mugshot.thumb)}}]}
                     for mugshot in openid_profile.mugshot_set.all()]}}]}}
pretty obvious what that is doing.
(in the above instance it's actually being transformed into XML and
passed through XSLT to the user - but I also have the option of
delivering it to a UA directly as JSON)
HTML and XML I find I use directly when I can store a template
somewhere or where I have a process that produces SAX events or DOM
trees and I'm not altering it much.
--
Nic Ferrier
http://www.tapsellferrier.co.uk
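[Editorial note: Nic says the dict above "is actually being transformed
into XML" but doesn't show the transform. Below is a minimal,
hypothetical sketch of one convention consistent with his data: keys
starting with '@' become attributes, nested dicts become child elements,
and lists repeat the enclosing tag. It does no character escaping.]

```python
def to_xml(tag, value):
    """Serialize a dict in the '@'-attribute convention to an XML string.

    Sketch only: no escaping, no declarations; not Nic's actual code.
    """
    if isinstance(value, list):
        # A list repeats the enclosing element once per item.
        return "".join(to_xml(tag, item) for item in value)
    if not isinstance(value, dict):
        # Leaves become text content.
        return "<%s>%s</%s>" % (tag, value, tag)
    attrs = "".join(' %s="%s"' % (k[1:], v)
                    for k, v in value.items() if k.startswith("@"))
    children = "".join(to_xml(k, v)
                       for k, v in value.items() if not k.startswith("@"))
    return "<%s%s>%s</%s>" % (tag, attrs, children, tag)
```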
Steve G. Bjorg wrote:
> I'm sorry, but my mind is too stuborn to see the difference between
> the two. Are you saying that JSON is better b/c it encodes the type
> of a value (e.g. bool, numbe, string) in its representation? Isn't
> there otherwise a 1-to-1 mapping b/w JSON and XML?
I tend to think that there's no obvious mapping between JSON and XML; or
more appropriately, the same type of mapping between JSON and XML as you
would get between C data structures and XML. Almost nothing.
But more specifically, JSON has first class support for lists (arrays).
How do you represent a list in XML? There are multiple ways. Which
you will have to describe to someone. The most compact representations
usually aren't self descriptive (ie, <ul><li>... is pretty self
descriptive, but not terribly compact).
> One thing that always bugged me about objects was that I couldn't
> encapsualte sub-properties without resorting to other objects. For
> instance, doing something like:
> doc.settings.alignment = "left";
>
> Here, I would need to have a 'doc' object and a 'settings' object that
> contains a reference to "left". This bugs me because the 'settings'
> kind of object might not be useful anywhere else (I realize that in
> this example that might not be the case, but let's stick with it for
> argument sake).
>
> In XML, I don't have this limitation. Instead, I can say
> doc[settings/alignment] = "left";
> It's all part of one object (or document; frankly, I don't see the
> difference between the two anymore, except that the latter seems to be
> an evolution of the former).
I don't grok the example; perhaps it's XPath; which is part of my
problem here; I've never bothered to fully learn all the minor XML
tooling dialects: XML schema, Relax NG, XPath, etc. Enough to scrape
by. Programming languages though ... no problem, I live and breathe them.
Having 'objects' poof into life like that is obviously quite nice, and
something most programming languages don't directly support.
> Could you give a concrete example of where JSON makes life better?
The best example of making life better is getting people out of the
document design business. I suppose I've suffered through so much
crappy XML in my life that I've come to see XML document design as an
art more than a science. Data design is usually easier, or at least
more obvious.
But here's a concrete example I used recently:
http://muellerware.org/projects/twit-growl/
Download the twit-growl.py file, then scroll down to line 89, where I
'load' the JSON into Python objects. The value for me is that I can
look at the JSON output when debugging, and figure out exactly what the
structure is that I'll be needing to use to access the data. With XML,
you can either build XPath queries (which I'm no good at), or use some
kind of DOM accessor. In both of those XML cases, I just had to make a
mental switch to the XML model, instead of being able to stay in the
data model the whole time. XML just got in the way.
A better example would include accessing non-string values out of the
JSON; in XML, you'll need to do some conversion from strings to
[whatever]. In JSON, you get it for free (for numbers, booleans,
strings, null).
--
Patrick Mueller
http://muellerware.org
On Jun 5, 2007, at 11:53 AM, Nic James Ferrier wrote:
> For example, here's a bit of Python code I've written just recently:
>
> return {"abbr":
> {"@class": "user",
> "@title": strip_openid_url(openid_profile.openid),
> "div":
> [{"span":
> {"@class": "nickname",
> "span": openid_profile.nick_name }},
> {"ul":
> {"@class": "mugshots",
> "div":
> [{"li":
> [{"img":
> {"@class": "mugshot",
> "@alt": mugshot.name,
> "@src": file_field_get_url(mugshot.shot) }},
> {"img":
> {"@class": "avatar",
> "@alt": mugshot.name,
> "@src": "/sitemedia/%s" % (mugshot.thumb)}}]} for mugshot in
> openid_profile.mugshot_set.all()]}}]}}
>
> pretty obvious what that is doing.
>
It depends on what you consider "obvious," which is based on past
experiences. I had to expand the text above in an editor before I
could make sense of it (the lack of indentation didn't help, but I
assume that was a side-effect of submitting it).
Just for the sake of comparison, here is the same as an XML document in
C#. Obviously, its beauty lies in the eye of the beholder. ;)
XDoc result = new XDoc("abbr")
    .Attr("class", "user")
    .Attr("title", strip_openid_url(openid_profile.openid))
    .Start("div")
        .Start("span").Attr("class", "nickname")
            .Start("span").Value(openid_profile.nick_name).End()
        .End()
    .End()
    .Start("div")
        .Start("ul").Attr("class", "mugshots");
foreach(mugshot mugshot in openid_profile.mugshot_set.all()) {
    result.Start("div")
        .Start("li")
            .Start("img").Attr("class", "mugshot")
                .Attr("alt", mugshot.name)
                .Attr("src", file_field_get_url(mugshot.shot)).End()
        .End()
        .Start("li")
            .Start("img").Attr("class", "avatar")
                .Attr("alt", mugshot.name)
                .Attr("src", string.Format("/sitemedia/{0}", mugshot.thumb)).End()
        .End()
        .End();
}
return result.EndAll().ToJson();
- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
Chuck Hinson wrote:
> > But I think you missed my point, and the point was that the combined
> > "edit" & "edit_form" seem to be part of process flow that REST does
> > not define. REST seems to define the retrieval of the data and the
> > update of the data, but not the interim process.
>
> The 'hypermedia as the engine of application state' is the part of
> REST that describes this.
>
> The 'uniform interface' aspect talks about GET/PUT/DELETE of a single
> resource, but it's the collection of resources that make up the full
> application, and REST describes how that collection of resources is
> used.

Again, the point is that when designing a REST system you don't need all
the hacks that a non-javascript-enabled browser requires. The browser
has almost zero knowledge of process or workflow, so the knowledge has
to be embedded into representations or created as otherwise unneeded
resources.

In a custom REST app, the client can know that when the process wants to
add an item, it requests the hypermedia "index", looks for the URL of
the "add" resource, builds a known content type to add, then POSTs to
the add URL. For an edit, it requests the hypermedia "index", looks for
both the URL of the resource and the "update" URL for the resource,
modifies a known content type, then PUTs to the update URL. In the
browser the content type is generic, and thus there is a need for both
an add-form and an edit-form resource. Accommodating those hacks to make
the browser work with generic content types means creating "nouns" for
the forms. Or at least that's how it seems to me.

Again, this understanding doesn't invalidate REST at all, it just
clarifies some of the difference between "pure" REST and REST for use in
a browser.

> > That said, can you explain how one would communicate to the same URL
> > on the server that one view should be public and another private?
> > Seems to me it should be a different URL.
>
> They would be different URLs, I wasn't suggesting otherwise.
That was part of the point I was trying to make: for that context,
multiple URLs are required, not one.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
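[Editorial note: the custom-client flow described above, request the
hypermedia "index" and discover the "add"/"update" URLs rather than
construct them, can be sketched as below. The index structure and link
relations are hypothetical; the point is only that the client follows
links it finds in a representation.]

```python
def find_link(index, rel):
    """Pull a URL out of a hypermedia 'index' representation by its
    link relation, so the client never hardwires URL construction rules.
    """
    for link in index["links"]:
        if link["rel"] == rel:
            return link["href"]
    raise LookupError(rel)

# Hypothetical index representation a client might GET:
index = {"links": [{"rel": "add", "href": "/items"},
                   {"rel": "update", "href": "/items/42"}]}
```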
> And you have things like OPTIONS, which is supposed to say what
> operations are allowed on a particular resource.

Try this:

curl -iX OPTIONS http://ericjbowman.com/date

It tells you which HTTP methods are allowed, and tells you the languages
the service is available in. You're seeing an output transformation as
an option; I see it as merely another representation of my resource, not
a remote function call -- the client is requesting to GET some data from
the server -- and this request can easily be fulfilled by an
intermediary because /date's output is cacheable.

> some systems won't/can't cache GETs which contain parameters.
> But this is more a flaw in those systems than any flaw in your
> method.

But, crucially, the cache component of my server connector handles this
just fine. By using XSLTC's input cache, once any date conversion is
requested, the algorithm doesn't need to run for that date again
(provided the server stays up). Once an output transformation has been
run, the results are cached in Resin's HTTP cache. The only system I can
control would scale very, very well because of this, if need be. Instead
of running the algorithm every time (which would happen in RPC), I've
created a cacheable lookup table from the algorithm output.

-Eric
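[Editorial note: the ETag=MD5 scheme Eric is waiting on Resin to support
can be sketched directly; the helper names below are hypothetical. The
ETag is derived from the representation's bytes, so a conditional GET
whose If-None-Match matches can be answered 304 without resending the
body.]

```python
import hashlib

def etag_for(body):
    """Derive a strong ETag from the MD5 of the representation bytes."""
    return '"%s"' % hashlib.md5(body).hexdigest()

def conditional_get(if_none_match, body):
    """Answer 304 Not Modified when the client's validator still matches."""
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, None
    return 200, body
```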
Patrick Mueller wrote:
> Jon Hanna wrote:
>> XML is good for data you will map to documents, though (if you are a
>> whiz at whatever language is doing the work and lousy at XSLT, you
>> will obviously have different thresholds than if you can write
>> enough code to pump something into a transform but are a whiz at
>> XSLT). JSON isn't too hot for languages other than javascript if you
>> have XML parsers to hand but no JSON parser.
>>
>> I tend to find that if I'm writing javascript then I'm also producing
>> XML documents or fragments out of what I'm receiving, so I tend to
>> favour XML. However, JSON does indeed come into its own for anything
>> where just having a javascript object is what you want.
>
> I'm not a fan of JSON because it's JavaScript; I'm a fan of JSON
> because it's a data structure. There are plenty of JSON-munching
> libraries available for various languages; scroll to the bottom of:
> http://json.org/

Sounds like XSD+SOAP.

cheers
Bill
On Jun 5, 2007, at 12:06 PM, Patrick Mueller wrote:
> The best example of making life better is getting people out of the
> document design business. I suppose I've suffered through too much
> crappy XML in my life, that I've come to see XML document design as an
> art more than a science. Data design is usually easier, or at least
> more obvious.

There is no difference b/w the two: both documents and data structures
are designed to fit a problem space. You might prefer one approach over
another, because you are more comfortable with it, but that's a personal
matter and not a technical issue.

- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
Steve Bjorg wrote:
>
> On Jun 5, 2007, at 12:06 PM, Patrick Mueller wrote:
> > The best example of making life better is getting people out of the
> > document design business. I suppose I've suffered through too much
> > crappy XML in my life, that I've come to see XML document design as an
> > art more than a science. Data design is usually easier, or at least
> > more obvious.
> There is no difference b/w the two: both documents and data
> structures are designed to fit a problem space. You might prefer one
> approach over another, because you are more comfortable with it, but
> that's a personal matter and not a technical issue.
compare / contrast:
<book>
<title>Read this book</title>
<quantity>1</quantity>
<author>Bob the Dog</author>
</book>
{
"title" : "Read this book",
"quantity" : 1,
"author" : [
"Bob the Dog"
]
}
Note the JSON data is more self-descriptive than the XML; I can tell
just from looking at the title value, that it's a string; that the
quantity is a number, and that the author is a list (multi-valued), with
one string element. None of which you could infer from the XML.
There is a difference.
--
Patrick Mueller
http://muellerware.org
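[Editorial note: Patrick's compare/contrast can be checked mechanically;
here is a sketch using Python's stdlib parsers against his book example.
The XML parser hands back strings and the caller must know to convert,
while the JSON parser hands back a number directly.]

```python
import json
import xml.etree.ElementTree as ET

book_xml = "<book><title>Read this book</title><quantity>1</quantity></book>"
book_json = '{"title": "Read this book", "quantity": 1}'

# ElementTree yields text: the quantity arrives as the string "1".
quantity_xml = ET.fromstring(book_xml).find("quantity").text

# json yields typed values: the quantity arrives as the integer 1.
quantity_json = json.loads(book_json)["quantity"]
```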
Sorry, that was a misquote; shouldn't have been Chuck Hinson, should
have been Mike Dierken.

> -----Original Message-----
> From: Mike Schinkel [mailto:mikeschinkel@...]
> Sent: Tuesday, June 05, 2007 4:16 PM
> To: 'Mike Dierken'
> Cc: 'REST Discuss'
> Subject: RE: [rest-discuss] RESTful CRUD Module?
This is a very interesting and important discussion by the way.
The problem with xml is that it is pure syntax. So there is no clear
way to interpret the data. People think that it is easy to interpret
because they are looking at it as humans and making all kinds of
assumptions without realising it. The problem is that when you code,
these assumptions may or may not be valid, and then lots of nasty,
difficult-to-diagnose bugs appear. [0]
Furthermore, as people have mentioned on this list, trying to decide
how to fit an arbitrary data structure into the tree nature of xml is
a waste of time [1]. For those of you who know Java, it is as if,
when one serialized a bunch of objects, one also had to decide for
every class which ones came first in the serialization, which ones
second, third, ... and then on top of that make further arbitrary
decisions as to whether to use attributes or elements for a relation
or property. So the xml route on its own is unclear and
unnecessarily complicated.
Javascript and other OO languages were designed for data modeling, so
those issues are usually worked out for the language in question, and
of course for people using tools that use that language it is very
easy to parse.
The advantage of xml as a syntax is that it has namespaces which use
URIs. As a result it is a lot more web friendly. The global namespace
means that people can develop their vocabulary clearly without
stepping on each other's toes. This is something that is not
completely dealt with correctly in most OO languages [2]
Now one clean solution is rdf. This is the ultimate modeling
framework. The semantics are dead clear, and it is language
independent, amazingly enough! It is language independent because it
is all about semantics, how words relate to the world [3]
Since the words are URIs they are valid worldwide. The syntax is
how one strings words together to form valid sentences. And there are
a number of ways of doing that: rdf/xml, ntriples, turtle, n3, ...
Since xml is a syntax that supports URI name spaces and rdf is a
semantics based on URIs it is clear to see why RDF/XML is such an
attractive proposition. (Though there are others such as TriX [4]
that have been proposed, and that are easier to parse for people
stuck with DOM tools. )
So javascript has clear semantics but lacks URI namespaces [2]. XML
has a well-understood syntax but lacks semantics.
RDF/XML has a clear semantics and uses a widely used format. I think
there are more and more libraries now that allow one to simply map
rdf xml into any programming language. Libraries like so(m)mer for
Java [5], ActiveRDF for Ruby [6], Javascript [7], and many more [8]...
Now there are in fact more readable serialisations of rdf, which I do
recommend for human consumption, N3 being the most interesting of
them all. This is how one can then write out the JSON example:
@prefix bk: <http://www.hackcraft.net/bookrdf/vocab/0_1/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
[] a bk:Book;
dc:title "Read this Book";
xxx:quantity 1;
foaf:author <http://bobthedog.com/foaf#bob> .
Notice 2 things as soon as you use rdf to write this out:
- the book is missing a URI. Why not give it a URL?
- the xxx:quantity seems a bit odd no? How can a book have a
quantity. Should it not be another thing, say an order that
has a quantity?
These points appear immediately to anyone who works with N3 and rdf.
But in the xml there is no way to say if there is anything wrong or
how to interpret it.
Clearly this would be better
<http://amazon.com/books/order/12312312> a az:Order;
:contains ( <http://amazon.com/books/isbn/1231231> ) .
<http://amazon.com/books/isbn/1231231>
dc:title "Read this book";
foaf:author <http://bobthedog.com/foaf#bob> .
Since we are always dealing with URLs and since URLs name resources,
and resources return representations, we are always RESTful.
Henry
[0] In order to properly understand even a well written format such
as atom, you have to go and read the humanly defined atom ietf spec
that is written in english. And even people who read it regularly
miss some of the implications of what is going on.
[1] http://blogs.sun.com/bblfish/entry/rest_without_rdf_is_only
[2] http://blogs.sun.com/bblfish/entry/duck_typing_done_right
[3] see my illustration here:
http://blogs.sun.com/bblfish/resource/RDF-syntax-semantics.png
[4] http://www.mulberrytech.com/Extreme/Proceedings/html/2004/
Stickler01/EML2004Stickler01.html
[5] https://sommer.dev.java.net which used @rdf annotations
[6] http://www.activerdf.org/
[7] JavaScript, since Tim Berners-Lee's Tabulator
http://blogs.sun.com/bblfish/entry/semantic_web_mashups_with_tabulator
is written in JavaScript; but I don't know exactly here: is there
anything as neat as using annotations in so(m)mer?
[8] http://blogs.sun.com/bblfish/entry/250_semantic_web_tools
there are in fact 500 tools listed there now
[9] http://www.w3.org/2001/sw/DataAccess/json-sparql/
On 5 Jun 2007, at 17:02, Patrick Mueller wrote:
> Steve Bjorg wrote:
> >
> > On Jun 5, 2007, at 12:06 PM, Patrick Mueller wrote:
> > > The best example of making life better is getting people out of
> the
> > > document design business. I suppose I've suffered through too much
> > > crappy XML in my life, that I've come to see XML document
> design as an
> > > art more than a science. Data design is usually easier, or at
> least
> > > more obvious.
> > There is no difference b/w the two: both documents and data
> > structures are designed to fit a problem space. You might prefer one
> > approach over another, because you are more comfortable with it, but
> > that's a personal matter and not a technical issue.
>
> compare / contrast:
>
> <book>
> <title>Read this book</title>
> <quantity>1</quantity>
> <author>Bob the Dog</author>
> </book>
>
> {
> "title" : "Read this book",
> "quantity" : 1,
> "author" : [
> "Bob the Dog"
> ]
> }
>
> Note the embedded data is more self-descriptive than the XML; I can
> tell
> just from looking at the title value, that it's a string; that the
> quantity is a number, and that the author is a list (multi-valued),
> with
> one string element. None of which you could infer from the XML.
>
> There is a difference.
>
> --
> Patrick Mueller
> http://muellerware.org
>
Disagreeing for the discussion's sake ...

On Jun 5, 2007, at 10:16 PM, Mike Schinkel wrote:
> In a custom REST app it can know that when the process wants to add
> an item it requests the hypermedia "index", looks for the URL of the
> "add" resource, builds a known content type to add, then POSTs to
> the add URL.

... although usually, the "ADD URL" is the URL of the resource that
returns the index on GET and accepts a new entity on POST ...

> For an edit, it requests the hypermedia "index", looks for both the
> URL of the resource and the "update" URL for the resource, modifies
> a known content type, then PUTs to the update URL.

... no, it PUTs to the URL of the resource it wants to change ...

> In the browser the content type is generic and thus there is a need
> for both an add form and an edit form resource. Accommodating those
> hacks to make the browser work with generic content types means
> creating "nouns" for the forms to accommodate them. Or at least
> that's how it seems to me.

I agree that the uniform interface alone is not enough, but it
defines a little more semantics than you make it seem.

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On 06/06/07, Patrick Mueller <pmuellr@...> wrote:
> Steve Bjorg wrote:
> >
> > On Jun 5, 2007, at 12:06 PM, Patrick Mueller wrote:
> > > The best example of making life better is getting people out of the
> > > document design business. I suppose I've suffered through too much
> > > crappy XML in my life, that I've come to see XML document design as an
> > > art more than a science. Data design is usually easier, or at least
> > > more obvious.
> > There is no difference b/w the two: both documents and data
> > structures are designed to fit a problem space. You might prefer one
> > approach over another, because you are more comfortable with it, but
> > that's a personal matter and not a technical issue.
>
> compare / contrast:
>
> <book>
> <title>Read this book</title>
> <quantity>1</quantity>
> <author>Bob the Dog</author>
> </book>
>
> {
> "title" : "Read this book",
> "quantity" : 1,
> "author" : [
> "Bob the Dog"
> ]
> }
>
> Note the embedded data is more self-descriptive than the XML; I can tell
> just from looking at the title value, that it's a string; that the
> quantity is a number, and that the author is a list (multi-valued), with
> one string element. None of which you could infer from the XML.
As was said, you're putting your preferences in front of fact.
A schema-aware XML document could provide the data types, far more so
than JSON. As for self-descriptive? Nothing to choose; it's personal
preference.
regards
--
Dave Pawson
XSLT XSL-FO FAQ.
http://www.dpawson.co.uk
Stefan Tilkov wrote:
> Disagreeing for the discussion's sake ...
> > In a custom REST app it can know that when the process wants to
> > add an item it requests the hypermedia "index", looks for the URL
> > of the "add" resource, builds a known content type to add, then
> > POSTs to the add URL.
> ... although usually, the "ADD URL" is the URL of the resource that
> returns the index on GET and accepts a new entity on POST ...

I wasn't meaning the entry point hypermedia "index" to necessarily be
the same as the "index" you were referring to. I was speaking more of
a general roadmap for all exposed services.

> > For an edit, it requests the hypermedia "index", looks for both
> > the URL of the resource and the "update" URL for the resource,
> > modifies a known content type, then PUTs to the update URL.
> ... no, it PUTs to the URL of the resource it wants to change ...

You are correct. I misspoke on the specific details (I was tired... :)
Here it is, revised: For an edit, it requests the hypermedia "index",
looks for the URL of the resource, modifies a known content type,
then PUTs to the URL.

> > In the browser the content type is generic and thus there is a
> > need for both an add form and an edit form resource. Accommodating
> > those hacks to make the browser work with generic content types
> > means creating "nouns" for the forms to accommodate them. Or at
> > least that's how it seems to me.
> I agree that the uniform interface alone is not enough, but it
> defines a little more semantics than you make it seem.

What defines a little more semantics? What is "it?"

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
On Jun 6, 2007, at 10:05 AM, Mike Schinkel wrote:
> Stefan Tilkov wrote:
> > I agree that the uniform interface alone is not enough, but it
> > defines a little more semantics than you make it seem.
> What defines a little more semantics? What is "it?"

What I meant is: there are some things (some semantics) I can rely on
when a system is built in a RESTful way, e.g. the safety of GET, the
idempotence of PUT and DELETE, the concept that PUT affects the
resource I send the request to, etc. I do not believe that the uniform
REST interface, as 'implemented' in HTTP, is enough in the sense that
I don't need any additional description (however informal); I suppose
you agree.

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
On 5 Jun 2007, at 19:53, Nic James Ferrier wrote:
>
> return {"abbr":
> {"@class": "user",
> "@title": strip_openid_url(openid_profile.openid),
> "div":
> [{"span":
> {"@class": "nickname",
> "span": openid_profile.nick_name }},
> {"ul":
> {"@class": "mugshots",
> "div":
> [{"li":
> [{"img":
> {"@class": "mugshot",
> "@alt": mugshot.name,
> "@src": file_field_get_url(mugshot.shot) }},
> {"img":
> {"@class": "avatar",
> "@alt": mugshot.name,
> "@src": "/sitemedia/%s" % (mugshot.thumb)}}]} for mugshot in
> openid_profile.mugshot_set.all()]}}]}}
Since our friendly HTML mailer from Yahoo strips all indentation it
was actually really *difficult* for me to understand this code
without running it through Python and some pretty print..
{'abbr': {'@class': 'user',
          '@title': 'strip_openid_url(openid_profile.openid)',
          'div': [{'span': {'@class': 'nickname',
                            'span': 'openid_profile.nick_name'}},
                  {'ul': {'@class': 'mugshots',
                          'div': [{'li': [{'img': {'@alt': 'mugshot.name',
                                                   '@class': 'mugshot',
                                                   '@src': 'file_field_get_url(mugshot.shot)'}},
                                          {'img': {'@alt': 'mugshot.name',
                                                   '@class': 'avatar',
                                                   '@src': '/sitemedia/mugshot.thumb'}}]},
                                  {'li': [{'img': {'@alt': 'mugshot.name',
                                                   '@class': 'mugshot',
                                                   '@src': 'file_field_get_url(mugshot.shot)'}},
                                          {'img': {'@alt': 'mugshot.name',
                                                   '@class': 'avatar',
                                                   '@src': '/sitemedia/mugshot.thumb'}}]},
                                  {'li': [{'img': {'@alt': 'mugshot.name',
                                                   '@class': 'mugshot',
                                                   '@src': 'file_field_get_url(mugshot.shot)'}},
                                          {'img': {'@alt': 'mugshot.name',
                                                   '@class': 'avatar',
                                                   '@src': '/sitemedia/mugshot.thumb'}}]}]}}]}}
Also, doing everything in one go like this makes it really hard to
maintain; I would definitely split the code up into so-called
functions, but OK, let's do the monolith thing for argument's sake.
I don't see the big advantage of your massive thing compared to for
instance using ElementTree:
import xml.etree.ElementTree as ET

abbr = ET.Element("abbr")
abbr.set("class", "user")  # attributes are assigned with .set()
abbr.set("title", strip_openid_url(openid_profile.openid))
div = ET.SubElement(abbr, "div")
span = ET.SubElement(div, "span")
span.set("class", "nickname")
span.text = openid_profile.nick_name
ul = ET.SubElement(div, "ul")
ul.set("class", "mugshots")
uldiv = ET.SubElement(ul, "div")
for mugshot in openid_profile.mugshot_set.all():
    li = ET.SubElement(uldiv, "li")
    img = ET.SubElement(li, "img")
    img.set("class", "mugshot")
    img.set("src", file_field_get_url(mugshot.shot))
    avatar = ET.SubElement(li, "img")
    avatar.set("class", "avatar")
    avatar.set("src", "/sitemedia/%s" % mugshot.thumb)
Similarly, if using a proper hand-coded XSD (I wouldn't include all
those irrelevant div/span structures, though) and using XMLBeans in
Java it would be something like this: (We all know Java is more verbose)
AbbrDocument abbrDoc = AbbrDocument.Factory.newInstance();
Abbr abbr = abbrDoc.getAbbr();
abbr.setClass("user");
abbr.setTitle(strip_openid_url(openid_profile.openid));
Div div = abbr.newDiv();
Span span = div.newSpan();
span.setClass("nickname");
span.setText(openid_profile.nick_name);
Ul ul = div.newUl();
ul.setClass("mugshots");
Div ulDiv = ul.newDiv();
for (Mugshot mugshot : openid_profile.mugshot_set) {
    Li li = ulDiv.newLi();
    Img img = li.newImg();
    img.setClass("mugshot");
    img.setSrc(file_field_get_url(mugshot.shot));
    Img avatar = li.newImg();
    avatar.setClass("avatar");
    avatar.setSrc("/sitemedia/" + mugshot.thumb);
}
I guess templates are really good when doing actual HTML, but I don't
see the big advantage for data structures. For instance, here it would
make sense to make some methods or subclasses that take care of
these silly setClass() calls, so you do a newAvatar() instead, etc.
Consider also the clients. There's no big difference in looking at
blah["abbr"]["div"]["span"]["span"] (which fails to check the class
attributes)
or
blah.get("div").get("span").get("span")
In fact with XML you can specify an (OK, not that easy) XPath to get
the content of the src attribute of an <img> tag whose class is
"mugshot" and that is a child of a "ul" with class "mugshots". I'm
not going to attempt that now; that is left as an exercise to the
author who chose this div/span microformat in question.
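For the record, that exercise can be sketched with the limited XPath subset in Python's stdlib ElementTree (the document below is abbreviated from the structure discussed above, and the src values are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Abbreviated version of the div/span structure discussed above.
doc = ET.fromstring(
    '<abbr class="user"><div>'
    '<ul class="mugshots"><div>'
    '<li><img class="mugshot" src="/shots/1.png"/>'
    '<img class="avatar" src="/sitemedia/1-thumb.png"/></li>'
    '</div></ul>'
    '</div></abbr>')

# Every img with class "mugshot" anywhere under a ul with class
# "mugshots"; the avatar img is filtered out by the predicate.
srcs = [img.get("src")
        for img in doc.findall(".//ul[@class='mugshots']"
                               "//img[@class='mugshot']")]
print(srcs)  # ['/shots/1.png']
```

A fuller XPath 1.0 engine would allow the same query plus axes and functions ElementTree lacks, but even the stdlib subset handles this attribute-matching case.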
One of the things that is useful with XML is that you can re-use an
existing schema, and clients might use whatever libraries they have
for that purpose already. The hottest example here is of course
XHTML, which you are using in some JSON-ish translation.
--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
On 6/4/07, Bill de hOra <bill@...> wrote: > > > Bob Haugen wrote: > > > > > > On 6/1/07, Alan Dean <alan.dean@... > > <mailto:alan.dean%40gmail.com>> wrote: > > > On 6/1/07, Bill de hOra <bill@... <mailto:bill%40dehora.net>> > > wrote: > > > > ... WWW is the wrong environment for ACID semantics. > > > > > > Agreed. > > > > > > Two-phase commit over the web is brittle & highly latent. This is why > > > WS-Transaction (AT) is a bad idea. > > > > > > http://en.wikipedia.org/wiki/WS-Transaction > > <http://en.wikipedia.org/wiki/WS-Transaction> > > > > > > "Long-running" (aka compensatory) transactions are the way to go. > > > > > > http://en.wikipedia.org/wiki/Long_running_transaction > > <http://en.wikipedia.org/wiki/Long_running_transaction> > > > > 2PC != ACID. > > > > I agree about ACID and WS-AT, but one of my previous transaction > > mentors claimed that some form of 2PC is unavoidable for coordination > > among independent agents. He was smarter than me, so I'll tentatively > > believe it. > > I've seen a paper that said Paxos was the same protocol as 2PC for some > number of actors; I don't know if that means 2PC is generally > unavoidable. The general theorem I know of is the 5 packet handshake. > For 5PH to work some cases need to be excluded to avoid "byzantine > generals" kinds of problems, and you might need to make some assumptions > (such as an "eventual arrival" if the channel is asymmetric). I liked it > because it applies directly to point to point messaging problems with an > unreliable channel (which is most reliable messaging scenarios). I have had a lot of fun looking at Paxos, 2PC and transactions. Paxos is a consensus protocol, get a bunch of processors to agree a value, however there are different solutions for different fault types and network conditions. A transaction is a case of uniform consensus - all processors must agree on the same value. Most cases of consensus only require the non-faulty processes to agree on a value. 
Thus, uniform consensus is considered more difficult than general consensus. That transactions are a case of consensus was not recognised until late in the literature.

One of the most important results in consensus theory is the fact that consensus is impossible in an asynchronous system with even one faulty processor, even with a perfect network and where the processor fail-stops. Basically you cannot tell the difference between a processor that has failed and one that has stopped. This result is used in the CAP paper. 2PC is an asynchronous consensus algorithm, which is why it cannot handle a single fault: if the transaction manager fails at the wrong time the system blocks. Skeen showed that you need three phases to avoid blocking; unfortunately no one could come up with a nice 3PC algorithm until recently.

Byzantine agreement ("consensus" replaced the term "agreement" in the literature) deals with the case where a processor can lie. This looks like a more challenging problem than the above, but does have a solution! The solution requires a synchronous system: messages are delivered in a fixed time and processors run at a known speed. Byzantine agreement appeared in the literature before the impossibility result.

Paxos works in the region between asynchronous and synchronous, called partially synchronous. Consensus can be achieved when messages are delivered in a timely manner, and the system remains safe otherwise. With 2n+1 processors, Paxos can reach consensus even if n of them fail. There is also a Byzantine version of Paxos, which can handle fewer failures than conventional Paxos, in line with the Byzantine agreement result.

During the '80s there was an attempt to bring consensus together with transaction theory, but the transaction guys rejected consensus because it was too expensive in terms of messages and processing; Jim Gray wrote a note on this (this pre-dated Paxos). However, recently Lamport and Gray produced Paxos Commit.
They took that approach to replicate the TM in 2PC; that this obvious approach took so long to solve illustrates to me why distributed computing can be so hard. For a single transaction in Paxos Commit, there is an instance of Paxos for each RM (Paxos is not applied directly to the transaction problem, i.e. Paxos Commit does not have to solve the uniform consensus problem). On the face of it this seems very expensive, but it is not. In fact in the fault-free case Paxos Commit completes in two phases; a third phase is only needed if there is a fault (in line with Skeen's result). This means that Paxos Commit has the same message delay as 2PC, i.e. it is as fast as 2PC, but requires more messages. With 2N+1 transaction managers, Paxos Commit can tolerate N failures. In the case where there is only one TM, Paxos Commit is the same protocol as 2PC, but without the fault tolerance!

I did some work on using Paxos Commit and HTTP together to support distributed transactions, though the transactions did not have full ACID properties:
http://www.allhands.org.uk/2006/proceedings/papers/624.pdf

It is fun reading the literature, but made difficult by the fact that the results are not in the order you would expect them.

cheers
Mark Mc Keown

> > > Compensation is extremely difficult in many (maybe most) cases.
> > > Can you really undo all of the effects of a set of distributed
> > > actions?
> > You don't need to. Design an exception management system that
> > kicks it out to a person to decide how to resolve it. They can
> > roll forward.
> > > In REST, probably uses a separate transaction resource, or at
> > > least that's true of all the RESTful transaction proposals I've
> > > seen on this list or in the recent book.
> > Yep. For an exchange of value it's useful if the state of the
> > exchange has separate identity to the thing of value. If there was
> > a design patterns book for the Web, this would be one of them.
> >
> > cheers
> > Bill
On 6 Jun 2007, at 01:02, Patrick Mueller wrote:
> compare / contrast:
>
> <book>
> <title>Read this book</title>
> <quantity>1</quantity>
> <author>Bob the Dog</author>
> </book>
>
> {
> "title" : "Read this book",
> "quantity" : 1,
> "author" : [
> "Bob the Dog"
> ]
> }
Add the information of this schema (sorry for not using RELAX NG..):

<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://example.com/NewXMLSchema"
        xmlns:tns="http://example.com/NewXMLSchema">
  <element name="book">
    <complexType>
      <sequence>
        <element name="title" type="string" minOccurs="1"/>
        <element name="quantity" type="nonNegativeInteger"/>
        <element name="author" type="string" minOccurs="1"
                 maxOccurs="unbounded"/>
      </sequence>
    </complexType>
  </element>
</schema>
and you're there: you then also have enough information for CREATING
a book. You see that the title is required, that quantity can't be
negative, and that although you can have more than one author, there
has to be at least one. (For the many-authors hint you could also
just wrap them in <authors>, which I would consider cleaner when we
are talking about larger collections.)
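For what it's worth, the constraints that schema expresses can be checked by hand in a few lines of stdlib Python; a rough sketch, not a real XSD validator (validate_book is a made-up helper):

```python
import xml.etree.ElementTree as ET

def validate_book(xml_text):
    """Check the constraints the schema expresses: a required title,
    a non-negative integer quantity, and at least one author."""
    book = ET.fromstring(xml_text)
    errors = []
    title = book.find("title")
    if title is None or not (title.text or "").strip():
        errors.append("title is required")
    qty = book.findtext("quantity", default="")
    if not qty.isdigit():  # nonNegativeInteger: digits only, no sign
        errors.append("quantity must be a non-negative integer")
    if not book.findall("author"):
        errors.append("at least one author is required")
    return errors

print(validate_book("<book><title>Read this book</title>"
                    "<quantity>1</quantity>"
                    "<author>Bob the Dog</author></book>"))  # []
print(validate_book("<book><quantity>-1</quantity></book>"))
```

The second call reports all three violations, which is exactly the feedback a schema-aware toolchain would give a client before a PUT or POST.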
I don't see a big issue with getting the (basic) type of the data,
the coder still needs to know what "quantity" MEANS to do anything
useful with it.
Think also about future extension points. Say you later realise it
would be great to have a link to the author's resource; here it would
just mean adding the attribute xlink:href="/authors/bob", which
shouldn't break existing clients. Change the JSON and all the clients
break: in this example you would have to introduce a new dictionary
instead of the string "Bob the Dog", just to add the link. (Or,
probably safer, you could add a parallel key "author-uri" or similar,
which would be OK had we not been talking about lists here.)
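The breaking-change point can be sketched in a few lines of Python; the evolved author-object shape below is hypothetical, invented only to illustrate the incompatibility:

```python
import json

old_doc = ('{"title": "Read this book", "quantity": 1,'
           ' "author": ["Bob the Dog"]}')
# Hypothetical evolved document: the author string became an object
# carrying a link, as discussed above.
new_doc = ('{"title": "Read this book", "quantity": 1,'
           ' "author": [{"name": "Bob the Dog",'
           ' "href": "/authors/bob"}]}')

def author_names(doc):
    # A client coded against the original shape: authors are strings.
    return [a.upper() for a in json.loads(doc)["author"]]

print(author_names(old_doc))   # ['BOB THE DOG']
try:
    author_names(new_doc)
except AttributeError as e:    # dict has no .upper(): old client breaks
    print("old client broke:", e)
```

An old XML client doing `book.findtext("author")` would simply never look at a new attribute, which is the robustness being claimed for the XML route.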
--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
On 4 Jun 2007, at 21:52, Peter Lacey wrote:
> <ExpenseReport xml:base="http://example.com/expenses"
> xmlns:xlink="http://www.w3.org/1999/xlink">
> <Submitter xlink:href="http://employees.example.com/placey">Peter
> Lacey</Submitter>
> <ChargeTo xlink:href="/departments/sales">Sales</ChargeTo>
> <ExpenseItem>
> <Date>2007-06-01</Date>
> <Description>Airfare</Description>
> <Amount>500.34</Amount>
> </ExpenseItem>
> </ExpenseReport>
> . Ditto, I do not want the client to tell me the URL of the sales
> department.

He's not telling you the URL of the Sales department, he's telling you the URI of who to charge. (Notice the I in URI. :-) )

Imagine the client wants to change the charge to the Marketing department instead. Now the strings "Sales" and "Marketing" should probably not be considered unique; they are like "Peter Lacey" - there could be two with the same name (although a company with two Marketing departments probably would have other issues to deal with..)

So the client could PUT /users/placey/expenses/123

> <ExpenseReport xml:base="http://example.com/expenses"
> xmlns:xlink="http://www.w3.org/1999/xlink">
> <Submitter xlink:href="http://employees.example.com/placey">Peter
> Lacey</Submitter>
> <ChargeTo xlink:href="/departments/marketing" />
> <ExpenseItem>
> <Date>2007-06-01</Date>
> <Description>Airfare</Description>
> <Amount>500.34</Amount>
> </ExpenseItem>
> </ExpenseReport>

Notice how the content of ChargeTo is empty; this is the part that should be thrown away in my opinion, as you don't want clients to be updating stuff at /departments/marketing when PUT-ing to /users/placey/expenses/123. Similarly, the name behind http://employees.example.com/placey would not be updated, for many reasons, the simplest being that you wouldn't know what the last name and first name are anymore.
I am in (minority, I guess) favour of partial PUTs (or should it be POSTs?), ie the client shouldn't HAVE to submit the whole representation back: PUT /users/placey/expenses/123 > <ExpenseReport xmlns:xlink="http://www.w3.org/1999/xlink"> > <ChargeTo xlink:href=/departments/marketing /> > <ExpenseItem> > <Description>Networking</Description> > </ExpenseItem> > </ExpenseReport> would change so that Marketing is charged, and the description is now much more subtle "Networking" (You met some old friend on the plane) Here there is no question about changing the Submitter, but as long as it would be the URI that is the important part, it should be slightly more robust than the value (Say you updated your name to "Peter S. Lacey" meanwhile) In my system I'm extending this so that several fields support arbitrary URIs, which means the client might not have to re-publish data for every service. (Security issues aside..) -- Stian Soiland, myGrid team School of Computer Science The University of Manchester http://www.cs.man.ac.uk/~ssoiland/
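A sketch of what a server accepting such a partial PUT might do, using dicts to stand in for the ExpenseReport fields (merge_partial is a made-up helper, not a real API; the semantics assumed are "submitted fields overwrite, omitted fields survive"):

```python
def merge_partial(stored, partial):
    """Merge a partial representation into stored resource state:
    fields present in the submitted representation overwrite stored
    ones; omitted fields keep their stored values."""
    merged = dict(stored)
    for key, value in partial.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_partial(merged[key], value)  # recurse
        else:
            merged[key] = value
    return merged

# Mirrors the example above: only ChargeTo and Description change.
stored = {"ChargeTo": "/departments/sales",
          "ExpenseItem": {"Date": "2007-06-01",
                          "Description": "Airfare",
                          "Amount": "500.34"}}
partial = {"ChargeTo": "/departments/marketing",
           "ExpenseItem": {"Description": "Networking"}}
print(merge_partial(stored, partial))
```

The Date and Amount survive untouched, which is the whole appeal: the client never had to round-trip data it did not mean to change.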
On 6/6/07, Henry Story <henry.story@...> wrote: > ... > RDF/XML has a clear semantics and uses a widely used format. I think > there are more and more libraries now that allow one to simply map > rdf xml into any programming language. Libraries like so(m)mer for > Java [5], ActiveRDF for Ruby [6], Javascript [7], and many more [8]... > > Now there are in fact more readable serialisations of rdf, which I do > recommend for human consumption. N3 being the most interesting of > them all. For me, the logical representation to use is RDF for the self-descriptive capability. Whether you choose to use RDF/XML or N3 is more a matter of personal choice, but I suspect that RDF/XML has the edge currently in toolsets. I am working on an e-commerce use case over on simplewebservices.org and I am utilizing RDF, see http://simplewebservices.org/index.php?title=Shopping Please bear in mind that it is currently being drafted; so be tolerant about typos, etc. However, please feel free to provide any feedback. Regards Alan Dean http://thoughtpad.net/alan-dean
Stian Soiland wrote:
> I am in (minority, I guess) favour of partial PUTs (or should it be
> POSTs?), ie the client shouldn't HAVE to submit the whole
> representation back:

I share that minority position. Indeed, I find it impossible to see how we cannot have partial PUTs allowed.

When we PUT we transfer *a* representation of the resource from client to server, just like when we GET we transfer *a* representation of the resource from server to client.

Since a resource can have more than one representation, and we can only ever PUT one representation, any PUT is potentially affecting an innumerable number of representations, as these may all depend on the server's knowledge of the resource - which we have just changed.

All PUTs are therefore partial in this way.

Following from that, there is no reason why one may not send a representation that omits some information (it is indeed very common for one representation of a resource to contain information another does not). There is nothing faulty with such a representation and therefore no reason why it may not be used.

Therefore whether partial PUTs may or may not be used becomes solely a matter of whether partial knowledge of a representation may be expressed in a particular content type.

One can also do partial PUTs using Content-Range, but this requires either the entity to be of a type where over-writing a fixed number of octets makes sense, or else the use of a custom range-unit.
On 6/6/07, Jon Hanna <jon@...> wrote: > > Stian Soiland wrote: > > I am in (minority, I guess) favour of partial PUTs (or should it be > > POSTs?), ie the client shouldn't HAVE to submit the whole > > representation back: > > I share that minority position. Indeed, I find it impossible to see how > we cannot have partial PUTs allowed. > > When we put we transfer *a* representation of the resource from client > to server, just like when we GET we transfer *a* representation of the > resource from server to client. > > Since a resource can have more than one representation, and we can only > ever PUT one representation, any PUT is potentially affecting an > innumerable number of representations as these may all depend on the > server's knowledge of the resource - which we have just changed. > > All PUTs are therefore partial in this way. > > Following from that there is no reason why one may not send a > representation that omits some information (it is indeed very common for > one representation of a resource to contain information another does > not). There is nothing faulty with such a representation and therefore > no reason why it may not be used. > > Therefore whether partial PUTs may or may not be used becomes solely a > matter of whether partial knowledge of a representation may be expressed > in a particular content type. > > One can also do partial PUTs using content-range but this either > requires either the entity to be of a type where over-writing a fixed > number of octets makes sense, or else the use of a custom range-unit. Hmmm ... "PUT and The Art of HTTP" If there is a representation that completely describes the resource, then a PUT will be complete. An example might be a PUT of a text/html document. However, as Jon rightly points out, some representations are a subset of the resource, and so (by implication) even though the PUT is 'representation complete' it is not 'resource complete'. 
An example might be a PUT to a system that supports resource metadata with an alternate representation format. Regards, Alan Dean http://thoughtpad.net/alan-dean
Alan Dean wrote: > If there is a representation that completely describes the resource, > then a PUT will be complete. An example might be a PUT of a text/html > document. Even in this case, the text/html document wouldn't necessarily contain every piece of information, even if we just limit "all information" to all of the information that might be available through some representation (which there is no real need to do, which becomes a practical matter when we concern that a server may decide to disregard certain information in all PUTs).
"Alan Dean" <alan.dean@...> writes: > However, as Jon rightly points out, some representations are a subset > of the resource, and so (by implication) even though the PUT is > 'representation complete' it is not 'resource complete'. An example > might be a PUT to a system that supports resource metadata with an > alternate representation format. Or PUTing formdata which describes a resource in the same way as the POSTed formdata that created the resource. -- Nic Ferrier http://www.tapsellferrier.co.uk
On Jun 6, 2007, at 7:09 AM, Josh Sled wrote:
> "Dave Pawson" <dave.pawson@...> writes:
>> A schema aware XML could provide the data types, far more so than
>> json.
>> As for self descriptive? Nothing to choose, it's personal preference.
>
> But in the example given, using just the defined semantics of XML
> and JSON,
> there's 2 things specified in the JSON that aren't in the XML:
>
> - 'quantity' is an integer.
> - 'author' is a list.
I would argue that the type is defined by the observer, not the
emitter. For instance, if I had the following XML:
<book>
<author>John Doe</author>
<author>Jane Doe</author>
<quantity>1</quantity>
</book>
The observer might not be aware of the fact that there can be
multiple <author> elements. It will read just the first one with:
string author = doc["author"].AsText;
Similarly, the <quantity> might have been emitted as a string, but is
read as an integer by the observer, with a default quantity of 0 if it
could not find a matching element:
int quantity = doc["quantity"].AsInt ?? 0;
By pushing the typing to the observer, we further decouple
participants in a communication. Said differently: strings and
structure are good enough for all cases. XML gives you exactly that
(and a bit more like xpath, xquery, xslt, etc.).
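The observer-side typing described above can be mirrored with Python's standard `xml.etree.ElementTree`; a sketch of the same idea as the C#-style snippets, using the element names from the quoted example:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<book>
  <author>John Doe</author>
  <author>Jane Doe</author>
  <quantity>1</quantity>
</book>
""")

# An observer unaware that <author> can repeat just reads the first match.
author = doc.findtext("author")

# The emitter wrote <quantity> as text; this observer decides it is an
# integer, defaulting to 0 when the element is missing.
raw = doc.findtext("quantity")
quantity = int(raw) if raw is not None else 0
```

The types live entirely in the reading code, not in the wire format, which is the decoupling being argued for.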
>
> As for the larger point, I think JSON's great for moving structs
> around.
> Every programming language has the same basic data types: string, int,
> double, boolean, map, list. Having a way to simply (concisely,
> readably,
> &c.) exchange them is nifty. And, no, it's not limited to javascript
> ... I've a Java system emitting JSON-serialized datapoints,
> consumed by
> python into RRD databases, and (if interesting) handed to another
> simple
> HTML-and-javascript UI for formatting and display.
But JSON is more brittle to changes than XML. The simple example of
one or more authors demonstrates that. Otherwise to protect yourself
you would have to always do the following for all key-value pairs:
{ "author" : [ { "#text" : "John Doe" } ] }
If you don't do this on the encoding side, you'll push the problem
out to all the decoders, which is exactly what happens with JSON.
That said, I think JSON rules in the browser. But webservices should
emit serialized PHP for PHP clients and microformat when embedding
data in HTML. So, I need one format that can be freely converted to
any of these and has good processing options. That's XML for me.
- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
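The one-or-more-authors brittleness Steve describes can also be handled defensively on the decoding side by normalizing on read; a minimal sketch (the helper name is mine, not a standard JSON facility):

```python
import json

def as_list(value):
    """Normalize a JSON value that may be absent, a scalar, or a list."""
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]

# The same key arrives as a scalar in one document and a list in another:
one = json.loads('{"author": "John Doe"}')
many = json.loads('{"author": ["John Doe", "Jane Doe"]}')

authors_one = as_list(one.get("author"))
authors_many = as_list(many.get("author"))
```

This doesn't remove the brittleness, it just moves the cost into every decoder, which is precisely the point being made.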
On 6/6/07, Steve Bjorg <steveb@...> wrote: > > > > > > > On Jun 6, 2007, at 7:09 AM, Josh Sled wrote: > > > "Dave Pawson" <dave.pawson@...> writes: > >> A schema aware XML could provide the data types, far more so than > >> json. > >> As for self descriptive? Nothing to choose, it's personal preference. > > > > But in the example given, using just the defined semantics of XML > > and JSON, > > there's 2 things specified in the JSON that aren't in the XML: > > > > - 'quantity' is an integer. > > - 'author' is a list. > > I would argue that the type is defined by the observer, not the > emitter. For instance, if I had the following XML: > <book> > <author>John Doe</author> > <author>Jane Doe</author> > <quantity>1</quantity> > </book> > > The observer might not be aware of the fact that there can be > multiple <author> elements. It will read just the first one with: > string author = doc["author"].AsText; > Similarly, the <quantity> might have been emitted as a string, but is > read as an integer by the observer with a default quantity of 0 if it > could find a matching element: > int quantity = doc["quantity"].AsInt ?? 0; > > By pushing the typing to the observer, we further decouple > participants in a communication. Said differently: strings and > structure are good enough for all cases. XML gives you exactly that > (and a bit more like xpath, xquery, xslt, etc.). For me, an important point to remember is that knowing the primitive type is often not enough. For example: is a weight value that is represented by a double primitive a measurement in kilograms or imperial pounds? Think what happened to the Mars Climate Orbiter mixing up imperial and metric. I don't know how json handles this, but RDF can easily do so. As an example, see: http://simplewebservices.org/index.php?title=Shopping Regards, Alan Dean http://thoughtpad.net/alan-dean
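Alan's point that a bare double is ambiguous can be made concrete by carrying the unit with the value. A hypothetical sketch (the class is mine; the lb-to-kg factor is the exact avoirdupois definition):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Weight:
    value: float
    unit: str  # "kg" or "lb"

    def to_kg(self) -> float:
        """Normalize to kilograms before any comparison or arithmetic."""
        if self.unit == "kg":
            return self.value
        if self.unit == "lb":
            return self.value * 0.45359237  # exact definition of the pound
        raise ValueError(f"unknown unit: {self.unit}")

# A bare 100.0 is ambiguous; tagging it avoids the Mars-Orbiter trap.
payload_lb = Weight(100.0, "lb")
payload_kg = Weight(45.359237, "kg")
```

RDF does this with typed literals and vocabulary terms; the sketch just shows the minimum information a representation has to carry either way.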
Excellent example. The "data as protocol" reality shows up in the XPath check for the Shopping protocol. REST already has contracts: data exchange with hypermedia. John Heintz http://johnheintz.blogspot.com/2007/06/doh-rest-already-had-contracts.html On 6/6/07, Alan Dean <alan.dean@...> wrote: > On 6/6/07, Steve Bjorg <steveb@...> wrote: > > > > > > > > > > > > > > On Jun 6, 2007, at 7:09 AM, Josh Sled wrote: > > > > > "Dave Pawson" <dave.pawson@...> writes: > > >> A schema aware XML could provide the data types, far more so than > > >> json. > > >> As for self descriptive? Nothing to choose, it's personal preference. > > > > > > But in the example given, using just the defined semantics of XML > > > and JSON, > > > there's 2 things specified in the JSON that aren't in the XML: > > > > > > - 'quantity' is an integer. > > > - 'author' is a list. > > > > I would argue that the type is defined by the observer, not the > > emitter. For instance, if I had the following XML: > > <book> > > <author>John Doe</author> > > <author>Jane Doe</author> > > <quantity>1</quantity> > > </book> > > > > The observer might not be aware of the fact that there can be > > multiple <author> elements. It will read just the first one with: > > string author = doc["author"].AsText; > > Similarly, the <quantity> might have been emitted as a string, but is > > read as an integer by the observer with a default quantity of 0 if it > > could find a matching element: > > int quantity = doc["quantity"].AsInt ?? 0; > > > > By pushing the typing to the observer, we further decouple > > participants in a communication. Said differently: strings and > > structure are good enough for all cases. XML gives you exactly that > > (and a bit more like xpath, xquery, xslt, etc.). > > For me, an important point to remember is that knowing the primitive > type is often not enough. For example: is a weight value that is > represented by a double primitive a measurement in kilograms or > imperial pounds? 
Think what happened to the Mars Climate Orbiter > mixing up imperial and metric. > > I don't know how json handles this, but RDF can easily do so. As an > example, see: > > http://simplewebservices.org/index.php?title=Shopping > > Regards, > Alan Dean > http://thoughtpad.net/alan-dean > > > > Yahoo! Groups Links > > > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
Stefan Tilkov wrote: > > What defines a little more semantics? What is "it?" > What I meant is: There are some things - some semantics - I > can rely on when a system is built in a RESTful way, e.g. the > safety of GET, the idempotence of PUT and DELETE, the concept > that PUT affects the resource I send the request to, etc. I > do not believe that the uniform REST interface, as > 'implemented' in HTTP, is enough in the sense that I don't > need any additional description (however informal); I suppose > you agree. Yes, I do agree. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On Jun 6, 2007, at 7:56 AM, Alan Dean wrote: > For me, an important point to remember is that knowing the primitive > type is often not enough. For example: is a weight value that is > represented by a double primitive a measurement in kilograms or > imperial pounds? Think what happened to the Mars Climate Orbiter > mixing up imperial and metric. > Easy problem to solve: stop using imperial units! What self- respecting engineer doesn't use metric units? :P (j/k) Semantizing data is an endless road. The next level is that units aren't enough, but you also need to know the time for the units (think currencies), then the location of the unit (conversion rates don't represent the true value in specific locations), and so on. We can reason about it in email, but we're just torturing engineers by building solutions on it. At some point, good enough is good enough and convention must rule over semantics, imo. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Of course convention rules. People, groups and standards bodies can mint new URLs to describe different parts of the conceptual space. As in Java or other OO programming languages by the way, but more flexible, since you can say that two things (eg. classes) are the same by saying that they are owl:sameAs each other for example. Different vocabularies will capture different levels of detail, and will be more or less useful for different applications. But the nice thing is that I, as a service provider, can choose the vocabulary I need to put my service together, I don't have to reinvent it. And if I choose one, the meaning will be clear (as clear as the vocabulary at least) to my users. So if you do this in XML you have at least the same problem. XML is pure syntax so you are not going to escape it. You will have to come up with words to describe what you mean. So you may want to choose and integrate different vocabularies from different places, but this is not well defined in XML, as it's pure syntax. So you then have to invent a semantics anyway. And since every xml group is using their own home brewed semantics, you are going to waste a lot of your and your company's time. It may not seem like this as you start off, because using XML is such a huge improvement over what came before... By the way, the example is really interesting. For those who want to read it in other formats I really urge you to download cwm [1]. For example what does the word Service mean in the example? I can find out by running cwm http://purl.org/dc/dcmitype/Service --base=http://purl.org/dc/dcmitype/ | less That does not work on the example.com vocabularies of course. Let me look at that example in more detail... Henry [1] python script: http://www.w3.org/2000/10/swap/doc/cwm.html On 6 Jun 2007, at 10:59, Steve Bjorg wrote: > On Jun 6, 2007, at 7:56 AM, Alan Dean wrote: > > For me, an important point to remember is that knowing the primitive > > type is often not enough.
For example: is a weight value that is > > represented by a double primitive a measurement in kilograms or > > imperial pounds? Think what happened to the Mars Climate Orbiter > > mixing up imperial and metric. > > > Easy problem to solve: stop using imperial units! What self- > respecting engineer doesn't use metric units? :P (j/k) > > Semantizing data is an endless road. The next level is that units > aren't enough, but you also need to know the time for the units > (think currencies), then the location of the unit (conversion rates > don't represent the true value in specific locations), and so on. We > can reason about it in email, but we're just torturing engineers by > building solutions on it. At some point, good enough is good enough > and convention must rule over semantics, imo. > > - Steve > > -------------- > Steve G. Bjorg > http://www.mindtouch.com > http://www.opengarden.org > > >
On 6/6/07, Henry Story <henry.story@...> wrote: > > That does not work on the example.com vocabularies of course. > The example.org vocabularies are there as placeholders right now as I am still writing the use case. I don't know of any currency or measurement vocabularies offhand (xsd only deals with primitives), so I will come back to those and do some research later. I am guessing that there must be some, but I don't know. If anyone has any references, I would be very appreciative :-) Regards, Alan Dean http://thoughtpad.net/alan-dean
2.3.3 Simplicity The primary means by which architectural styles induce simplicity is by applying the principle of separation of concerns to the allocation of functionality within components. If functionality can be allocated such that the individual components are substantially less complex, then they will be easier to understand and implement. Likewise, such separation eases the task of reasoning about the overall architecture. I have chosen to lump the qualities of complexity, understandability, and verifiability under the general property of simplicity, since they go hand-in-hand for a network-based system. Applying the principle of generality to architectural elements also improves simplicity, since it decreases variation within an architecture. Generality of connectors leads to middleware [22]. [emphasis added] He mentions the "principle of generality" several times in the thesis, and it clearly is a key constraint on issues like keeping the number of "verbs" to a minimum, but he never defines it. I googled the term and only came up with a definition from political theory. I'm trying to flesh out my own theory/principle of generality and its relationship to loose coupling, and I'd rather not reinvent the wheel. Thanks. -- Nick
Steve Bjorg wrote: > Easy problem to solve: stop using imperial units! What self- > respecting engineer doesn't use metric units? :P (j/k) If mail could be more RESTful people in the US could see a pro-imperial version of that mail :)
On 6/6/07, Jon Hanna <jon@...> wrote: > > Steve Bjorg wrote: > > Easy problem to solve: stop using imperial units! What self- > > respecting engineer doesn't use metric units? :P (j/k) > > If mail could be more RESTful people in the US could see a pro-imperial > version of that mail :) I assume that this exchange is tongue-in-cheek ;-) ... but on a serious note, there is a real (indeed regulatory) need to represent both in certain circumstances in the UK (and you can't delegate the conversion to the UA). Also, there are many companies here that are multi-currency: permitting settlement in either GBP or EUR, for example. The exchange rate issue is typically handled by applying an exchange weighting during the basket activity and applying a rule at checkout to ensure that the currency has not appreciated / depreciated beyond certain thresholds. Alan
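The checkout rule Alan describes can be sketched in a few lines. Everything here is hypothetical (function name, threshold, and rates are mine): a rate is captured when the basket is priced, and checkout verifies the live rate has not drifted beyond a threshold before settling.

```python
def rate_within_threshold(basket_rate, live_rate, threshold=0.02):
    """True if the live rate is within `threshold` (fractional drift)
    of the exchange rate applied during the basket activity.

    If this returns False, the checkout rule would force re-pricing
    rather than settling at the stale rate.
    """
    drift = abs(live_rate - basket_rate) / basket_rate
    return drift <= threshold

# ~0.9% drift: settle.       ~4.3% drift: re-price.
ok = rate_within_threshold(1.16, 1.17)
stale = rate_within_threshold(1.16, 1.21)
```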
That looks like a really cool example, but I'm afraid that I don't understand enough of rdf to grasp everything that's going on there. Any chance of expanding the example to include line-by-line commentary on the rdf - what each line means and what it allows you to do? Barring that, how about a pointer to some good RDF articles that do more than just explain the theory; ones that maybe give some worked examples. (I've googled RDF tutorials, and there are lots out there - most of them aren't quite what I'm looking for.) --Chuck On 6/6/07, Alan Dean <alan.dean@...> wrote: > On 6/6/07, Steve Bjorg <steveb@...> wrote: > > > > > > > > > > > > > > On Jun 6, 2007, at 7:09 AM, Josh Sled wrote: > > > > > "Dave Pawson" <dave.pawson@...> writes: > > >> A schema aware XML could provide the data types, far more so than > > >> json. > > >> As for self descriptive? Nothing to choose, it's personal preference. > > > > > > But in the example given, using just the defined semantics of XML > > > and JSON, > > > there's 2 things specified in the JSON that aren't in the XML: > > > > > > - 'quantity' is an integer. > > > - 'author' is a list. > > > > I would argue that the type is defined by the observer, not the > > emitter. For instance, if I had the following XML: > > <book> > > <author>John Doe</author> > > <author>Jane Doe</author> > > <quantity>1</quantity> > > </book> > > > > The observer might not be aware of the fact that there can be > > multiple <author> elements. It will read just the first one with: > > string author = doc["author"].AsText; > > Similarly, the <quantity> might have been emitted as a string, but is > > read as an integer by the observer with a default quantity of 0 if it > > could find a matching element: > > int quantity = doc["quantity"].AsInt ?? 0; > > > > By pushing the typing to the observer, we further decouple > > participants in a communication. Said differently: strings and > > structure are good enough for all cases. 
XML gives you exactly that > > (and a bit more like xpath, xquery, xslt, etc.). > > For me, an important point to remember is that knowing the primitive > type is often not enough. For example: is a weight value that is > represented by a double primitive a measurement in kilograms or > imperial pounds? Think what happened to the Mars Climate Orbiter > mixing up imperial and metric. > > I don't know how json handles this, but RDF can easily do so. As an > example, see: > > http://simplewebservices.org/index.php?title=Shopping > > Regards, > Alan Dean > http://thoughtpad.net/alan-dean > > > > Yahoo! Groups Links > > > >
* eoinprout <eoin@...> [2007-06-05 14:15]: > It looks like RPC to me because it looks like you're making a > function call rather than requesting a resource. So what? All requests end up running code on the server. Arguing about whether a URI looks like a function call or not is string theory for the web. > RPC does not have to be stateful. That’s not even wrong. It’s like saying RESTful systems do not have to have more than a single resource. In neither case can you talk about an architectural style. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Hello to everyone,
as we know, the key constraint of REST is
Hypermedia as the engine of application state
As most of us probably have noticed, that makes for one bulky
linguistic entity. It is, to be frank, a mouthful. Being,
however, that it is the most important aspect of REST, it also
comes up a lot whenever I try to explain REST or discourse about
it. Consequently, conducting such conversation is often awkward
and straining.
Even the corresponding initialism, HATEOAS, is less than easily
handled. Worse, the eye wants to read it as an acronym, and in
that capacity it starts with the uncomfortable sequence “hate”.
It almost reads like “hate oats”. Whether you read this as a
compound noun or as a verb and noun – it does not for straight-
faced conversation make.
So here is my proposal: let’s shorten the awkward part of the
phrase, “as the engine of”, to a single word meaning the same
thing: “driving”. Now, any rephrasing is not going to put the
focus on “hypermedia” quite in the same way as Roy’s phrasing
manages to, which is the trade-off; in return, however, we get
a term that is much easier to incorporate into sentences:
Hypermedia-driven application state
If people start using this, I suspect that “application” will
frequently get shortened to “app”, which makes the phrase roll
off the tongue even easier. In fact, read it out aloud:
Hypermedia-driven app state
Notice that? It has a rhythm. Linguistically, this is important;
it makes the term feel lightweight and natural to say, even if it
has a lot of syllables. I also like the form with “application”
shortened because it keeps the weight of the term on the “hyper-
media-driven” part, restoring some of the precision of Roy’s
wording.
Finally, this abbreviates as “HDAS” – which works excellently.
Needless to say it suffers none of the awkwardness of “HATEOAS”.
But being four letters also puts it into the sweet spot for
initialisms that people use. And there appear to be only obscure
expansions for “HDAS”, so the REST meaning should quickly become
the dominant understanding of the abbreviation.
This, therefore, is my submission: let’s start referring to the
constraint as “hypermedia-driven app state”.
The constraint has not received enough attention despite being
the pillar upon which REST, uh, rests; popular understanding of
REST is nebulous at best as a consequence. We need to explain
this principle more and better, and it is my honest conviction
that the bulkiness of the established term is an impediment. We
as REST proponents, of all people, should know the importance of
naming things.
Respectfully yours,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
I am going to be writing a number of blog posts on this book, starting http://blogs.sun.com/bblfish/entry/restful_web_services_the_book I am also reading it using Safari Books. Henry Home page: http://bblfish.net/ Sun Blog: http://blogs.sun.com/bblfish/ Foaf name: http://bblfish.net/people/henry/card#me On 24 May 2007, at 20:03, Zhang Yining wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > It's out on O'Reilly Safari two(?) days ago[1], so anyone with a > safari > account can start reading it online. > > [1] http://safari.oreilly.com/9780596529260 > > Mark W. Humphries wrote: > > > > > > I'm on the verge of ordering this book. Any considered opinions? > > > > Cheers, > > Mark Humphries > > Manila, Philippines > > > > - -- > Zhang Yining > URL: http://www.zhangyining.net | http://www.yining.org > mailto: yining@... | zhang.yining@... > Fingerprint: 25C8 47AE 30D5 4C0D A4BB 8CF2 3C2D 585F A905 F033 >
> I am going to be writing a number of blog posts on this book, > starting http://blogs.sun.com/bblfish/entry/restful_web_services_the_book Cool! Thanks for sharing. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
[ Attachment content not displayed ]
A. Pagaltzis wrote: > as we know, the key constraint of REST is > > Hypermedia as the engine of application state > > Being, however, that it is the most important aspect of > REST, it also comes up a lot whenever I try to explain > REST or discourse about it. Consequently, conducting > such conversation is often awkward and straining. Not to be pedantic, but did you really mean to say *the* most important, key constraint? At best I would think it is on a par with the other constraints, especially when compared with the uniform interface constraint. Personally, I think that "hypermedia as engine of application state" has been hijacked (even by Roy himself) to prematurely quash any exploration of URL construction. As such, I'm always sensitive when I hear people want to draw more attention to it. What I'm advocating here, and have been on and off as I've had time, is that we should be looking at using "hypermedia AND url construction" instead of having the false dichotomy of "hypermedia OR url construction." More specifically, I'm referring to the use of URI Templates for servers to convey their intentions to the client via hypermedia. Further, I think that if URL construction with URI Templates were incorporated into REST best practices we'd see a lot less pushback from people on the hypermedia constraint. Of course we need to get URI templates to become a recommendation, but encouraging URI template use with REST could give weight to seeing that recommendation finally happen. > So here is my proposal: let's shorten the awkward part of the > phrase, "as the engine of", to a single word meaning the same > thing: "driving". 
Now, any rephrasing is not going to put the > focus on "hypermedia" quite in the same way as Roy's phrasing > manages to, which is the trade-off; in return, however, we > get a term that is much easier to incorporate into sentences: > > Hypermedia-driven application state > > If people start using this, I suspect that "application" will > frequently get shortened to "app", which makes the phrase > roll off the tongue even easier. In fact, read it out aloud: > > Hypermedia-driven app state > > Notice that? It has a rhythm. Linguistically, this is > important; it makes the term feel lightweight and natural to > say, even if it has a lot of syllables. I also like the form > with "application" > shortened because it keeps the weight of the term on the > "hyper- media-driven" part, restoring some of the precision > of Roy's wording. Wow. Given Roy's past comments regarding interpretations of his thesis, I ain't touchin that one with a ten foot pole! '-) > The constraint has not received enough attention despite > being the pillar upon which REST, uh, rests; popular > understanding of REST is nebulous at best as a consequence. One of the reasons I believe it hasn't received "enough attention" is because its reference in Roy's thesis was fleeting (only one sentence?), and the particulars of its implementation have been nebulous. Here on the list when someone has asked for examples to demonstrate the hypermedia constraint they have frequently been dismissively told that "any web page with links is an example." While that answer might make the answerers feel smug, it has not provided acceptable guidance for people trying to implement the constraint for use in RESTful services not targeting the browser. That said, codifying the constraints into concrete examples and implementations was my main reason for joining with Alan Dean to start the Simple Web Services initiative. 
My hope was to create a collaborative structure to capture some best practices for use of REST's constraints from the REST thought leaders and forge them into concrete examples, implementations, and so on. Just FYI. BTW, why not just call it "the hypermedia constraint?" -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On 07/06/07, Henry Story <henry.story@...> wrote: > I am going to be writing a number of blog posts on this book, > starting http://blogs.sun.com/bblfish/entry/ > restful_web_services_the_book About the book, or selling RDF? regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
On 6/7/07, Chuck Hinson <chuck.hinson@...> wrote: > That looks like a really cool example, but I'm afraid that I don't > understand enough of rdf to grasp everything that's going on there. > > Any chance of expanding the example to include line-by-line commentary > on the rdf - what each line means and what it allows you to do? The example is currently only in version 1 draft. I will take your comments on board and add in more explanatory text over time. Regards, Alan Dean http://thoughtpad.net/alan-dean
On 6/7/07, John D. Heintz <jheintz@...> wrote: > > I don't know if this is what Roy meant, but this is how I internalized it. > > Generality is preferring generic/shared/common programming models instead of specific/unique/custom ones. +1 John is right. When Roy says "Generality of connectors leads to middleware" I believe that he means, for example, that I can install an HTTP Cache intermediary and "it just works" because it doesn't need to know anything about the applications on the network - just the HTTP protocol. Regards, Alan Dean http://thoughtpad.net/alan-dean
A. Pagaltzis wrote: > Hello to everyone, > > as we know, the key constraint of REST is > > Hypermedia as the engine of application state I'm not sure it's meaningful to talk about a single constraint as the "key" constraint. I think HatEoAS appears particularly important right now because: 1. It's the most novel part of REST (almost every other constraint has been done a lot more in different systems). 2. It's the most visually "webby" part of REST, though this is a matter of perception as much as anything else. 3. It's a constraint that has been relatively neglected in much discussion of REST and the balance is being redressed by a recent re-focusing on it. However, constraints by their very nature either hold or don't. Even if we consider HatEoAS as the keystone, removal of any other block brings the whole arch tumbling down. Obviously CoD breaks that analogy, but it does so in ways that are themselves well defined in REST. > So here is my proposal: let’s shorten the awkward part of the > phrase, “as the engine of”, to a single word meaning the same > thing: “driving”. Now, any rephrasing is not going to put the > focus on “hypermedia” quite in the same way as Roy’s phrasing > manages to, which is the trade-off; in return, however, we get > a term that is much easier to incorporate into sentences: > > Hypermedia-driven application state This also gives us "Hypermedia-driven applications" to describe applications whose state is so driven, which is a phrase I have found myself using without actually attempting a coinage.
Mike Schinkel wrote: > Personally, I think that "hypermedia as engine of application state" has > been hijacked (even by Roy himself) to prematurely quash any exploration of > URL construction. As such, I'm always sensitive when I hear people want to > draw more attention to it. Nope, if the construction isn't coming from the media it isn't REST. I've sided with you on the wider argument about the importance of URI design (though I think most of its advantages are irrelevant to REST and the exceptions only obliquely relevant), but the possibility of URI construction is a disadvantage to "nice" URIs, not an advantage. > What I'm advocating here, and have been on and off as I've had time, is > that we should be looking at using "hypermedia AND url construction" instead > of having the false dichotomy of "hypermedia OR url construction." More > specifically, I'm referring to the use of URI Templates for servers to > convey their intentions to the client via hypermedia. If the server communicates a URI template and how it should be used then it IS hypermedia. If the client "just knows" how to construct a URI then it isn't. Hypermedia is any medium from which a parser can derive a URI and its purpose (though it may rely upon passing a human-readable string, image or similar to a user to determine the purpose) from the document without any knowledge beyond the contents of that document, the URI that document represents (if relevant) and the general rules for parsing that document. A good test of this is whether it can deal with a change of URIs in the path, host, scheme and query string portion (moving information between each of these parts I'd consider a plus but not a vital necessity). URI templates in an entity received from the server are hence hypermedia. > Wow. Given Roy's past comments regarding interpretations of his thesis, I > ain't touchin that one with a ten foot pole! 
'-) A more balanced approach seems to me to be to interpret it plenty, but listen to any objections he has on that interpretation. It's no use to us if we don't interpret it to some extent. The problem is only if we get this wrong. Personally since I'm not so hubristic as to think I'm always right I wouldn't be upset if Roy criticised something I said about his thesis, rather I'd be glad of an opportunity to learn from the master. > One of the reasons I believe it hasn't received "enough attention" is > because it's reference in Roy's thesis was fleeting (only one sentence?), > and particulars of it's implementation has been nebulous. Agreed. That concision isn't a flaw though, but it does mean that there is a place for more discussion. > Here on the list when someone has asked for examples to demostrate the > hypermedia constraint they have frequently been dismissively told that "any > web page with links is an example." While that answer might make the > answerers feel smug, it has not provided acceptable guidance for people > trying to implement the constraint for use in RESTful services not targeting > the browser. Actually, it's a pretty good example in a lot of ways. The one real disservice I think we do in that example is not saying "with links or forms". That it doesn't directly relate to what people are thinking about when they think about media types aimed at other uses (particularly web services) is a point rather than an excuse to feel smug - web pages work and are presumably getting something right. Ignoring the case of webservices here is a bit like a kōan that can bring enlightenment. I don't feel smug when I refer to web pages as an example of hypermedia, I think "why did it take me so long to get this myself?" If people don't get it from that example, I think they may need *lots* of examples. Adding RSS, ATOM, SVG, Google Sitemaps, RDF documents with seeAlso and so on may still not be enough. 
Really, hypermedia is so simple that some people (focusing on other concerns, especially if their background makes RPC or other non-RESTful solutions seem more obvious) can't change gears. Reminds me of the IRC conversation that led to one of the participants designing http://www.cafepress.com/rest.2592837
Jon Hanna wrote: > Hypermedia is any medium from which a parser can derive a URI and its > purpose Correction: of course non-www hypermedia could have different ways to identify other resources/content/whatever construct makes sense in that system. *Web Hypermedia* is any...
> >Treat the URI as opaque. Verbs in a URI might be a bad smell, a very bad >smell, but there's nothing RESTful or unRESTful about them. What matters is >which verb counts: the method verb (in the case of HTTP, GET, PUT, &c.) or the >one in the identifier. A URI is just an identifier for a resource, and as long >as clients are able to treat it as such, there's no problem with having >anything in it. > It only smells bad if you insist "transform" is a verb. I see it as synonymous with "channel" -- which is a noun. It makes sense to me as the consumer of a service which has a variety of output transformations for any given resource. Since the query doesn't change, only the representation, and since it's optional I see this as a valid use for URI parameters. Ugly or not. ;-) -Eric
On 6/7/07, John D. Heintz <jheintz@...> wrote: > > Hi Nick, > > I don't know if this is what Roy meant, but this is how I internalized it. > > Generality is preferring generic/shared/common programming models instead of specific/unique/custom ones. +1, well said. That's how I always interpreted it. Mark.
> This, therefore, is my submission: let’s start referring to the > constraint as “hypermedia-driven app state”. Not bad, but the new RWS book calls it "connectedness." That's even easier. - Pete
--- In rest-discuss@yahoogroups.com, "Mark Baker" <distobj@...> wrote:
>
> On 6/7/07, John D. Heintz jheintz@... wrote:
> >
> > Hi Nick,
> >
> > I don't know if this is what Roy meant, but this is how I
internalized it.
> >
> > Generality is preferring generic/shared/common programming models
instead of specific/unique/custom ones.
>
> +1, well said. That's how I always interpreted it.
>
> Mark.
>
+1 as well. But let me clarify my original question. I am not seeking to
understand what the "principle of generality" as Roy uses it might mean;
I have my own interpretation of what I think it means and it is very
much in line with these excellent comments. Rather, I am seeking to find
out where Roy got the principle in the first place.
He uses the "principle of generality" and "generality principle" three
times in the thesis:
1.4
Hence, the architectural constraint is “uniform component
interface,” motivated by
the generality principle, in order to obtain two desirable qualities
that will become the
architectural properties of reusable and configurable components when
that style is
instantiated within an architecture.
2.3.3
Applying the principle of generality to architectural elements also
improves
simplicity, since it decreases variation within an architecture.
Generality of connectors
leads to middleware [22].
5.1.5
By applying the software engineering principle of generality to the
component interface, the
overall system architecture is simplified and the visibility of
interactions is improved.
His use of the term (especially the 3rd use) seems to suggest he is
merely citing an existing principle that he learned about from some
source. What I am looking for is the source of this cite. Sorry I wasn't
clearer originally.
Stop the Press! In trying to provide more context for this question, I
think I answered it myself. In looking at how Roy used the word
"principle" in the thesis, I found the following quote (thank god for
Acrobat's search function, which acts as a dynamic concordance):
1.4
Properties are induced by the set of constraints within an architecture.
Constraints are
often motivated by the application of a software engineering principle
[58] to an aspect of
the architectural elements. For example, the uniform pipe-and-filter
style obtains the
qualities of reusability of components and configurability of the
application by applying
generality to its component interfaces -- constraining the
components to a single interface
type. Hence, the architectural constraint is “uniform component
interface,” motivated by
the generality principle, in order to obtain two desirable qualities
that will become the
architectural properties of reusable and configurable components when
that style is
instantiated within an architecture.
[58] is a cite to:
C. Ghezzi, M. Jazayeri, and D. Mandrioli. Fundamentals of Software
Engineering <http://www.infosys.tuwien.ac.at/se-book/> .
Prentice-Hall, 1991.
I googled the title and hit paydirt: slides for teaching with the book
<http://www.infosys.tuwien.ac.at/se-book/slides/> . And indeed, Chapter
3 <http://www.infosys.tuwien.ac.at/se-book/slides/Ch3.ppt> deals with
the following key "Software Engineering Principles":
* Rigor and formality
* Separation of concerns
* Modularity
* Abstraction
* Anticipation of change
* Generality
* Incrementality
In case you are interested (and don't want to download the ppt) here is
the slide on Generality:
* While solving a problem, try to discover if it is an instance of a
more general problem whose solution can be reused in other cases
* Carefully balance generality against performance and cost
* Sometimes a general problem is easier to solve than a special case
What I like about this description of the principle is that it
highlights both the benefits (reuse and ease of solution), as well as
the costs (performance and cost).
I'd only add one other benefit regarding generality (or extend the reuse
benefit): serendipity. For an upcoming presentation on WOA I created the
following slide:
SOA: Specific-Operation Architecture vs. Serendipity-Oriented
Architecture
* Unexpected reuse is the value of the web
* Tim Berners-Lee
* Two of the goals of REST: independent evolvability and
design-for-serendipity
* Roy T. Fielding
* Engineer for serendipity
* Roy T. Fielding
(The "Specific-Operation Architecture" is a thinly veiled knock on
typical WS-*-based approach to SOA.)
The Internet and the Web are paradigms of Serendipity-Oriented
Architectures. Why? Largely because of their simple generality. It is my
belief that generality is one of the major enablers of serendipity. So
here I immodestly offer Gall's General Principle of Serendipity: "Just
as generality of knowledge is the key to serendipitous discovery,
generality of purpose is the key to serendipitous (re)use."
-- Nick
--- In rest-discuss@yahoogroups.com, Peter Lacey <placey@...> wrote: > > > This, therefore, is my submission: let’s start referring to the > > constraint as “hypermedia-driven app state”. > > Not bad, but the new RWS book calls it "connectedness." That's even easier. > > - Pete > Only problem is that like REST, you can't google specifically for "connectedness" in the RESTful sense. (REST and connectedness in a google search gets 782K hits.) I've been using HEAS (Hypermedia as the Engine of Application State) with some success. Easier to pronounce than HDAS. But HEAS is already an acronym for some stuff (gets 344K google hits). BTW, HDAS gets 74K google hits. If we are going to use an acronym, it would be nice for it to be somewhat unique. But I agree that the "hate" in HATEOAS is a non-starter. How about HMEAS (HyperMedia as the Engine of Application State)? REST+HMEAS currently only gets 36 hits! I'm going to start using HMEAS! -- Nick
A. Pagaltzis wrote: > as we know, the key constraint of REST is > > Hypermedia as the engine of application state > > As most of us probably have noticed, that makes for one bulky > linguistic entity. It is, to be frank, a mouthful. Being, > however, that it is the most important aspect of REST, it also > comes up a lot whenever I try to explain REST or discourse about > it. Consequently, conducting such conversation is often awkward > and straining. Agree. Being a Chinese, I have the similar feeling for the Chinese translation of the phrase. For the Chinese and those curious, my translation is: “超媒体作为应用状态的引擎” > So here is my proposal: let’s shorten the awkward part of the > phrase, “as the engine of”, to a single word meaning the same > thing: “driving”. Now, any rephrasing is not going to put the > focus on “hypermedia” quite in the same way as Roy’s phrasing > manages to, which is the trade-off; in return, however, we get > a term that is much easier to incorporate into sentences: > > Hypermedia-driven application state Yes, the direct translation will be: 超媒体驱动的应用状态, which I find easier to understand, at least to preach :-) So, yes, +1. > If people start using this, I suspect that “application” will > frequently get shortened to “app”, which makes the phrase roll > off the tongue even easier. In fact, read it out aloud: > > Hypermedia-driven app state no such short form in Chinese though. > This, therefore, is my submission: let’s start referring to the > constraint as “hypermedia-driven app state”. Has the consensus been reached yet? I believe only so will it not only be good for the spread and adoption of the term but for the REST too. Thank you. > Respectfully yours, > -- > Aristotle Pagaltzis // <http://plasmasturm. org/ <http://plasmasturm.org/>> -- Zhang Yining URL: http://www.zhangyining.net | http://www.yining.org mailto: yining@... | zhang.yining@... Fingerprint: 25C8 47AE 30D5 4C0D A4BB 8CF2 3C2D 585F A905 F033
Patrick Mueller wrote: > quantity is a number, and that the author is a list (multi-valued), with > one string element. None of which you could infer from the XML. > > There is a difference. It's the difference between structured versus nominal types. I don't think you can say one form is better than the other without saying what the problem context is. For now, it seems that JSON works out by virtue of targeting a controlled environment, the browser. Perhaps it does a better job of defining the right subset of nominal types for programmers working in languages like Python/Javascript than XSD did (in its attempt to define a superset of types for all-comers). If JSON turns out not to have the problems that SOAP+XSD did in the same places that SOAP+XSD has had problems, that might say something about XML. Until then, I'll reserve my judgment. Generally on "self-descriptive". The amount of information carried by a format is in large part a feature of what evaluates the format. Powerful evaluators would be things like Lisp interpreters and truth maintenance engines. After watching people around XML, then RDF, then REST repeatedly confuse each other, I find it helps not to use "self-descriptive" as a term at all. Another way of looking at it - if you're arguing up media-types as providing "self-description", you should really be jumping into RDF and web interlingua; they do so much more and thus will provide even more "self-description". Likewise, if you're arguing up JSON as providing "self-description" you should be jumping into javascript and/or Lisp as they provide even more "self-description". cheers Bill
Nick Gall wrote: > I am /not/ seeking > to understand what the "principle of generality" as Roy uses it might > /mean/; I have my own interpretation of what I think it means and it is > very much in line with these excellent comments. Rather, I am seeking to > find out where Roy got the principle in the first place. Why don't you ask him? cheers Bill
At the moment, I am casting my vote with using RDF/XML as the 'fundamental type' for my shopping use case: http://simplewebservices.org/index.php?title=Shopping this does not preclude obtaining alternate representation formats, such as json, by content-negotiation (which I will be elaborating on soon in the use case) but I am choosing RDF/XML as the fundamental type. Regards, Alan Dean http://thoughtpad.net/alan-dean On 6/8/07, Bill de hOra <bill@...> wrote: > > Patrick Mueller wrote: > > quantity is a number, and that the author is a list (multi-valued), with > > one string element. None of which you could infer from the XML. > > > > There is a difference. > > It's the difference between structured versus nominal types. I don't > think you can say one form is better than the other without saying what > the problem context is. > > For now, it seems that JSON works out by virtue of targeting a > controlled environment, the browser. Perhaps it does a better job of > defining the right subset of nominal types for programmers working in > languages like Python/Javascript than XSD did (in its attempt to define > a superset of types for all-comers). If JSON turns out not have the > problems that SOAP+XSD did in the same places that SOAP+XSD has had > problems, that might say something about XML. Until then, I'll reserve > my judgment. > > Generally on "self-descriptive". The amount of information carried by a > format is in large part a feature of what evaluates the format. > Powerful evaluators would be things like Lisp interpreters and truth > maintenance engines. After watching people around XML, then RDF, then > REST repeatedly confuse each other, I find it helps not use > "self-descriptive" as a term at all. > > Another way of looking at it - if you're arguing up media-types as > providing "self-description", you should really be jumping into RDF and > web interlingua; they do so much more and thus will provide even more > "self-description".
Likewise, if you're arguing up JSON as providing > "self-description" you should be jumping into javascript and/or Lisp as > they provide even more "self-description". > > cheers > Bill > >
On Jun 8, 2007, at 2:48 PM, Bill de hOra wrote: > Nick Gall wrote: > > > I am /not/ seeking > > to understand what the "principle of generality" as Roy uses it > might > > /mean/; I have my own interpretation of what I think it means and > it is > > very much in line with these excellent comments. Rather, I am > seeking to > > find out where Roy got the principle in the first place. > > Why don't you ask him? It would not have worked. I am currently working on the principle of vacation. Or should that be the constraint of vacation? Congrats on finding the correct answer. ....Roy
Hmm, apparently hypermedia can also work as the engine of vacation state? ;-) wm ----- Original Message ----- From: "Roy T. Fielding" <fielding@...> To: "Bill de hOra" <bill@...> Cc: "Nick Gall" <nick.gall@...>; "REST Discuss" <rest-discuss@yahoogroups.com> Sent: Saturday, June 09, 2007 12:29 AM Subject: Re: [rest-discuss] Re: What specifically is the "principle of generality" that Roy's thesis mention : On Jun 8, 2007, at 2:48 PM, Bill de hOra wrote: : > Nick Gall wrote: : > : > > I am /not/ seeking : > > to understand what the "principle of generality" as Roy uses it : > might : > > /mean/; I have my own interpretation of what I think it means and : > it is : > > very much in line with these excellent comments. Rather, I am : > seeking to : > > find out where Roy got the principle in the first place. : > : > Why don't you ask him? : : It would not have worked. I am currently working on the principle : of vacation. Or should that be the constraint of vacation? : : Congrats on finding the correct answer. : : ....Roy
Mark Mc Keown wrote: > [...] > It is fun reading the literature, but made difficult by the fact that > the results > are not in the order you would expect them. Mark, Wow. If you have a weblog, please cut and paste that into it. It explained things very clearly (I can never seem to find anything on this list easily after it's been said). cheers Bill
And what's nice about that is, if you need to firm things up later (this is a phone number, that's an IEEE754 float), you (or a recipient) can annotate the RDF later. x has_a y ; y my:typesystem ieee754:float For the non-RDF people here, this means that: x has_a y is still workable data. Kind of like optional type declarations in programming languages. cheers Bill Alan Dean wrote: > At the moment, I am casting my vote with using RDF/XML as the > 'fundamental type' for my shopping use case: > > http://simplewebservices.org/index.php?title=Shopping > > this does not preclude obtaining alternate representation formats, > such as json, by content-negotiation (which I will be elaborating on > soon in the use case) but I am choosing RDF/XML as the fundammental > type. > > Regards, > Alan Dean > http://thoughtpad.net/alan-dean > > On 6/8/07, Bill de hOra <bill@...> wrote: >> >> Patrick Mueller wrote: >> > quantity is a number, and that the author is a list (multi-valued), >> with >> > one string element. None of which you could infer from the XML. >> > >> > There is a difference. >> >> It's the difference between structured versus nominal types. I don't >> think you can say one form is better than the other without saying what >> the problem context is. >> >> For now, it seems that JSON works out by virtue of targeting a >> controlled environment, the browser. Perhaps it does a better job of >> defining the right subset of nominal types for programmers working in >> languages like Python/Javascript than XSD did (in its attempt to define >> a superset of types for all-comers). If JSON turns out not have the >> problems that SOAP+XSD did in the same places that SOAP+XSD has had >> problems, that might say something about XML. Until then, I'll reserve >> my judgment. >> >> Generally on "self-descriptive". The amount of information carried by a >> format is in large part a feature of what evaluates the format.
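[Editor's note] Bill's point about late, optional typing can be sketched in a few lines of Python; the toy triple store and predicate names such as "my:typesystem" are invented for illustration and are not a real RDF vocabulary:

```python
# A toy in-memory triple store: a triple is just (subject, predicate, object).
triples = [("x", "has_a", "y")]

# The bare triple is already workable data for any consumer. Later, the
# sender (or any recipient) can firm things up by adding a statement
# ABOUT "y", without touching the original triple:
triples.append(("y", "my:typesystem", "ieee754:float"))

# A consumer that doesn't care about types simply ignores the extra
# statement; one that does can look the annotation up:
types = {s: o for (s, p, o) in triples if p == "my:typesystem"}
print(types.get("y"))  # ieee754:float
```

The design point is that the annotation is additive: nothing consuming the original triple has to change when the type statement arrives.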
>> Powerful evaluators would be things like Lisp interpreters and truth >> maintenance engines. After watching people around XML, then RDF, then >> REST repeatedly confuse each other, I find it helps not use >> "self-descriptive" as a term at all. >> >> Another way of looking at it - if you're arguing up media-types as >> providing "self-description", you should really be jumping into RDF and >> web interlingua; they do so much more and thus will provide even more >> "self-description". Likewise, if you're arguing up JSON as providing >> "self-description" you should be jumping into javascript and/or Lisp as >> they provide even more "self-description". >> >> cheers >> Bill >> >>
Delivery of content over http can be optimized with some simple yet powerful techniques including cache-control headers, etags and compression. I talk about them in more detail here: http://abstractfinal.blogspot.com/2007/06/http-content-optimization.html I am interested in knowing / understanding more such techniques that you have come across or employed in your projects. Thanks, Keyur
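[Editor's note] The ETag technique Keyur mentions can be sketched as a small server-side handler; the function names, hash choice, and max-age value below are illustrative assumptions, not a prescribed implementation:

```python
import hashlib

def make_etag(body):
    # One possible strong validator: a hash of the representation.
    return '"%s"' % hashlib.sha1(body).hexdigest()[:16]

def respond_get(body, if_none_match=None):
    """Return (status, headers, body) for a GET, honouring If-None-Match."""
    etag = make_etag(body)
    headers = {"ETag": etag, "Cache-Control": "max-age=3600"}
    if if_none_match == etag:
        return 304, headers, b""   # client's cached copy is still good
    return 200, headers, body

page = b"<p>hello</p>"
status, headers, _ = respond_get(page)                    # first fetch
status2, _, payload = respond_get(page, headers["ETag"])  # revalidation
print(status, status2, len(payload))  # 200 304 0
```

The saving is the empty 304 body on revalidation: the client re-sends only the validator, and the server re-sends only headers.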
Patrick Mueller wrote: > I'm thinking if you've decided on mapping your data to XML in the first > place, you've made a wrong turn. XML is a poor serialization format, > because it has little direct mapping to 'data'. It's great for > documents, but as programmers, we deal with data, we design data (or > try). Expecting programmers to design well thought out documents is too > much to ask, IMHO. Sounds like you live in a very tiny box, one in which you've never seen non-record-oriented data. In my world documents contain data and data includes documents. The distinction is fuzzy to nonexistent. My programs and my fellow programmers and I deal with this all the time. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Yes, please, post this somewhere we can find it and share it. (Do you have suggested references to these sources?) Thanks, John Heintz On 6/9/07, Bill de hOra <bill@...> wrote: > Mark Mc Keown wrote: > > [...] > > It is fun reading the literature, but made difficult by the fact that > > the results > > are not in the order you would expect them. > > Mark, > > Wow. If you have a weblog, please cut and past that into it. It > explained things very clearly (I can never seem to find anything on this > list easily after it's been said) > > cheers > Bill > > > > Yahoo! Groups Links > > > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
Keyur Shah wrote: > Delivery of content over http can be optimized with some simple yet > powerful techniques including cache-control headers, etags and > compression. I talk about them in more detail here: > http://abstractfinal.blogspot.com/2007/06/http-content-optimization.html Near the top of something I wrote on the same topic recently I put "Caching is not an optimisation". Optimisation is finding the problem spots and tweaking. The self-descriptive qualities of web messages automatically allow caches to do their jobs. It's not an optimisation, it's doing it right. Also, the issues with content codings you mention in IE (not the only place it has issues) don't exist in transfer codings. Not as widely supported, but safer for that reason, and more transparent.
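[Editor's note] The content-coding case Bill contrasts with transfer codings can be sketched as follows; the body and header values are invented. The point is that with a content-coding the compressed bytes ARE the entity, so entity headers and validators describe the compressed form (which is where the IE quirks come from), whereas a transfer coding is a hop-by-hop wrapper that leaves the entity alone:

```python
import gzip

body = b"<html>" + b"x" * 1000 + b"</html>"

# Content-coding: the gzipped bytes become the entity itself; headers
# such as Content-Length describe the compressed form, not the markup.
compressed = gzip.compress(body)
headers = {"Content-Encoding": "gzip",
           "Content-Length": str(len(compressed))}

# The recipient must undo the coding to recover the representation.
assert gzip.decompress(compressed) == body
print(len(body), "->", len(compressed))
```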
On 6/9/07, Keyur Shah <keyurva@...> wrote: > > Delivery of content over http can be optimized with some simple yet > powerful techniques including cache-control headers, etags and > compression. I talk about them in more detail here: > http://abstractfinal.blogspot.com/2007/06/http-content-optimization.html > > I am interested in knowing / understanding more such techniques that > you have come across or employed in your projects. You missed: Last-Modified (entity header) Expires (entity header) If-Match (request header) If-Modified-Since (request header) If-Unmodified-Since (request header) For an activity diagram of the server resolution of these and other headers, see http://thoughtpad.net/alan-dean/http-headers-status.html My links on HTTP cache: http://del.icio.us/alan.dean/http%2Bcache Regards, Alan Dean http://thoughtpad.net/alan-dean
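[Editor's note] A rough sketch of how a server might resolve two of these preconditions (If-Match and If-None-Match); the real rules in RFC 2616 also cover ETag lists, weak validators, and the date-based If-*-Since headers, so treat this as an illustration only:

```python
def resolve_conditionals(method, headers, current_etag):
    # Simplified precondition check; returns a status code that stops
    # the request, or None to let it proceed normally.
    if_match = headers.get("If-Match")
    if if_match is not None and if_match not in ("*", current_etag):
        return 412  # Precondition Failed: the resource changed under you
    if_none_match = headers.get("If-None-Match")
    if if_none_match is not None and if_none_match in ("*", current_etag):
        return 304 if method in ("GET", "HEAD") else 412
    return None

# A PUT guarded against the lost-update problem:
print(resolve_conditionals("PUT", {"If-Match": '"v1"'}, '"v2"'))      # 412
# A revalidating GET whose cached copy is current:
print(resolve_conditionals("GET", {"If-None-Match": '"v2"'}, '"v2"'))  # 304
# An unconditional request proceeds:
print(resolve_conditionals("DELETE", {}, '"v2"'))                      # None
```

This is the same If-Match mechanism raised earlier in the thread as the HTTP answer to overlapping PUTs and DELETEs.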
Jon Hanna wrote: > Mike Schinkel wrote: > > Personally, I think that "hypermedia as engine of > > application state" has been hijacked (even by Roy himself) > > to prematurely quash any exploration of URL construction. > > As such, I'm always sensitive when I hear people want to > > draw more attention to it. > > Nope, if the construction isn't coming from the media it isn't REST. Thanks, you've just artfully illustrated my point! Say "URL Construction" and the RESTians stick their fingers in their ears and scream "I don't hear you" while humming very loudly... ;-) IOW, the phrase "URL construction" is a trigger that causes most REST advocates to immediately become defensive rather than to be willing to explore how to achieve the benefits of both hypermedia *and* URL construction. It's kinda similar to the feelings that the phrase "amnesty" evokes in certain people here in the US right now. ;-) > > What I'm advocating here, and have been on and off as I've > > had time, is that we should be looking at using "hypermedia > > AND url construction" instead of having the false dichotomy > > of "hypermedia OR url construction." More specifically, I'm > > referring to the use of URI Templates for servers to convey > > their intentions to the client via hypermedia. > > If the server communicates a URI template and how it should > be used then it IS hypermedia. Your acknowledgement here is begrudging rather than revelational. It's my belief we need to stop being afraid of URI construction as a phrase and instead look for how to achieve most or all of its benefits without causing harm to the REST architecture. History has shown that any time there are short term benefits to a path that is perceived to be harmful long term, people will choose "the harmful path" over and over. Rather than take a priestly position of "thou shalt not" it's better to find an alternate solution that provides the benefits of both paths and relieves people's desire to take the harmful path.
And no Jon, I wasn't using "priestly position" to refer to you specifically. :) BTW, one reason people don't do hypermedia is that most HTTP components that do a GET just do a GET, they don't follow redirects and they certainly don't parse tag soup documents to find links to follow. It is usually an order of magnitude harder in most languages to program a system using hypermedia than it is just to construct the URL and issue a GET. If RESTians really want to promote the use of the hypermedia constraint they need to catalyze the creation of tools that make it brain-dead easy to program hypermedia in a generic sense. > A good test of this is whether it can deal with a change of > URIs in the path, host, scheme and query string portion > (moving information between each of these parts I'd consider > a plus but not a vital necessity). Heh. Any and every REST system will fail that test. After all, how do you change the entry point URL? '-) > > One of the reasons I believe it hasn't received "enough > > attention" is because it's reference in Roy's thesis was > > fleeting (only one sentence?), and particulars of it's > > implementation has been nebulous. > > Agreed. That concision isn't a flaw though, but it does mean > that there is a place for more discussion. Uh, that's my point. In the past I have been told "just read his thesis, it's all there." Heh. > > Here on the list when someone has asked for examples to > > demostrate the hypermedia constraint they have frequently > > been dismissively told that "any web page with links is an > > example." While that answer might make the answerers > > feel smug, it has not provided acceptable guidance for > > people trying to implement the constraint for use in > > RESTful services not targeting the browser. > > Actually, it's a pretty good example in a lot of ways. A good example for those who already understand it, that is. > The one real disservice I think we do in that example is > not saying "with links or forms".
There you go again... '-) > That it doesn't directly relate to what people are > thinking about when they think about media types aimed at > other uses (particularly web services) That's one fallacy RESTians often commit; assuming that everyone who is trying to understand REST is steeped in SOAP or RPC. Many web developers (myself included) had never used SOAP or RPC so for us this is unnecessary complication. > is a point rather than an excuse to feel smug - web pages > work and are presumably getting something right. Ignoring > the case of webservices here is a bit like a kōan that > can bring enlightenment. I don't feel smug when I refer > to web pages as an example of hypermedia, I think "why > did it take me so long to get this myself?" The point is that just saying "any web page with links and forms is an example" gives only about 20% of the guidance needed to properly implement a RESTful system. What's more, much of the open web is not fully RESTful; browser-based sites are often architected differently than REST-based web services because of the lack of PUT and DELETE support in forms and because of the differing needs a human has vs. a client agent. If the answer "any web page with links and forms is an example" gave all the guidance needed there would be no substantive discussion of specifics on this list, and there are plenty of those. But the asker of the question needs to first know the questions to ask, which often do not follow from "any web page with links and forms is an example." Rather than go round and round on this, will you at least agree that most people need more guidance than just being told: "any web page with links and forms is an example?" > If people don't get it from that example, I think they may > need *lots* of examples. Adding RSS, ATOM, SVG, Google > Sitemaps, RDF documents with seeAlso. Yes, and what is wrong with that? Alternately, what's wrong with actually explaining the concepts and constraints with examples?
I'll credit Joe Gregorio with being one of the few that does a good job with that. > and so on may still not > be enough. That's a specious argument. > Really hypermedia is almost too simple that some people > (focusing on other concerns, especially if their background > makes RPC or other non-RESTful solutions seem more obvious) > can't change gears. Maybe the problem for many that can't change gears is that the debates are too often framed as an abstract "good" vs. "bad", turning it into a religious debate rather than providing examples that illustrate the benefits of REST and that speak for themselves? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
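[Editor's note] Since this exchange turns on URI Templates arriving *in* hypermedia, here is a minimal sketch of the idea. The `{name}` expansion below is a deliberately tiny subset of the URI Templates draft, and the form structure and URLs are invented:

```python
import re
from urllib.parse import quote

def expand(template, values):
    # Expand only simple {name} placeholders; the URI Templates draft
    # defines far richer operators than this.
    return re.sub(r"\{(\w+)\}",
                  lambda m: quote(str(values[m.group(1)]), safe=""),
                  template)

# The template itself is delivered IN a representation by the server,
# so the client still learns the URI structure from hypermedia rather
# than "just knowing" it:
form = {"rel": "search", "template": "http://example.org/items?q={term}"}
uri = expand(form["template"], {"term": "red shoes"})
print(uri)  # http://example.org/items?q=red%20shoes
```

If the server later moves search to another host or path, it changes the template it serves and unmodified clients follow along, which is Jon's criterion for calling this hypermedia rather than out-of-band URI construction.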
Mike Schinkel wrote: >> If the server communicates a URI template and how it should >> be used then it IS hypermedia. > > Your acknowledgement here is begrudging rather than revelational. It's neither begrudging, nor an acknowledgement. If you choose to claim that sort of hypermedia also counts as URI construction, then grand, but it alters nothing about the arguments about using hypermedia vs. not using hypermedia.
On 6/9/07, Mike Schinkel <mikeschinkel@...> wrote: > ... lots ... > Jon Hanna wrote: > ... lots as well ... Question: Surely URL construction is orthogonal to REST? As an agnostic on this issue - I wonder if more heat than light is being cast here. For me, I don't find myself in a position where I can make a decision if I think it helps or hinders 'hypermedia yada yada' because I have seen no functioning examples that I am aware of, nor any analogous implementations. Perhaps Mike can assist the discussion by providing some links for us to enlighten ourselves? In my shopping use case, I am using HTTP-in-RDF to describe hypermedia links in the shopping basket that can be discovered and followed by a UA to traverse the application state model. In the use case I am treating the URI as a 'blob' (for lack of a better phrase). I would be interested to know if construction by the UA would be better - and if so, then how? http://simplewebservices.org/index.php?title=Shopping The reason that I pose this particular question is that my use case exists and can be commented on in a concrete way rather than having heatedly abstract discussions. Regards, Alan Dean http://thoughtpad.net/alan-dean
Report of this meeting is up online: http://www.w3.org/2007/04/wsec_report Given the presentations we saw, it's a bit of a disappointing writeup; there are definitely some statements about the value of SOAP1.2, WSDL2.0 and, in particular, WS-A that surprised me, and which went alongside some misunderstandings of REST "The original goal of SOAP has much in common with REST and in fact certain interpretations of the specifications cite the major difference being SOAP allows the definition of a method or operation name within the message and REST does not. This is largely due to the fact that REST is specific to HTTP (i.e. HTTP has an action header that can be used for the same purpose, or an associated URI) whereas Web services are multi-protocol and therefore need all information to be contained within the envelope. Some of the fundamental differences result from these different assumptions: REST assumes HTTP while WS-* assumes protocol neutrality. Others result from the fact that HTTP intentionally designs out features of existing IT systems (e.g. session based security, transaction coordination, reliability, etc.) and that the WS-* specifications basically amount to an attempt to put them back in." funniest quotes "the major goal of WS-* is interoperability and if you don't need interoperability you don't really need WS-*." And, after much praise of WS-Addressing and its rapid standardisation (==the reality that there are 3-4 different versions out there, and WS-DM 1.0 depends on two different versions), finally a note that "there was consensus surrounding EPRs was that vendors should show care in using EPRs" so, lots of interesting papers, but this summary is pretty disappointing. I wonder if this was the actual outcome of the workshop, or merely the opinions of those who volunteered to write it up.
As it is, it is more a "there are some problems with WS-*, but we can fix them" rather than some discussion on how best to use REST as an architecture for behind-the-firewall systems. -steve
Jon Hanna wrote: > Mike Schinkel wrote: > > If the server communicates a URI template and how it > > should be used then it IS hypermedia. > > > > Your acknowledgement here is begrudging rather than revelational. > > It's neither begrudging, nor an acknowledgement. I was afraid you'd pick on that wording. I spent 15 minutes on thesaurus.com trying to find the right words, but rather than waste further time I just went with those. > If you choose to claim that sort of hypermedia also counts > as URI construction > then grand, but it alters nothing about the > arguments about using hypermedia vs. not using > hypermedia. I didn't say it did nor would I want to. What I want however is to bring awareness to how the hypermedia constraint can accommodate URL construction. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Alan Dean wrote: > Question: Surely URL construction is orthogonal to REST? Yes, but the term "URL construction" elicits immediate condemnation from RESTians, and I'm trying to break that. > For me, I don't find myself in a position where I can make a > decision if I think it helps or hinders 'hypermedia yada > yada' because I have seen no functioning examples that I am > aware of, nor any analogous implementations. > > Perhaps Mike can assist the discussion by providing some > links for us to enlighten ourselves? Follow my future work on Simple Web Services.... :) > In my shopping use case, I am using HTTP-in-RDF to describe > hypermedia links in the shopping basket that can be > discovered and followed by a UA to traverse the application > state model. > > In the use case I am treating the URI as a 'blob' (for lack > of a better phrase). I would be interested to know if > construction by the UA would be better - and if so, then how? > > http://simplewebservices.org/index.php?title=Shopping > > The reason that I pose this particular question is that my > use case exists and can be commented on in a concrete way > rather than having heated abstract discussions. As I said above... -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Alan Dean wrote: > On 6/9/07, Mike Schinkel <mikeschinkel@...> wrote: >> ... lots ... >> Jon Hanna wrote: >> ... lots as well ... > > Question: Surely URL construction is orthogonal to REST? Depends on just what you mean. If by "URI construction" you mean "URIs are built according to some sort of sensible design" (I don't think either of us do) then it's largely orthogonal to REST but personally I think still a good thing. If you're talking about types of hypertext that don't just give you a straight URI link but a way of constructing it (like templates or forms) then it fits so much within REST as to be largely an orthogonal matter - REST needs hypertext but not necessarily any particular form. If you're talking about clients "just knowing" how to construct a URI, which is what I would normally take "URI construction" to mean, then you're no longer orthogonal to REST, you're counter to it.
Steve Loughran wrote: > "The original goal of SOAP has much in common with REST and in fact > certain interpretations of the specifications cite the major > difference being SOAP allows the definition of a method or operation > name within the message and REST does not. This is largely due to the > fact that REST is specific to HTTP (i.e. HTTP has an action header > that can be used for the same purpose, or an associated URI) whereas > Web services are multi-protocol and therefore need all information to > be contained within the envelope. Some of the fundamental differences > result from these different assumptions: REST assumes HTTP while WS-* > assumes protocol neutrality. Others result from the fact that HTTP > intentionally designs out features of existing IT systems (e.g. > session based security, transaction coordination, reliability, etc.) > and that the WS-* specifications basically amount to an attempt to put > them back in." If anyone here had said the above was something that WS-* proponents might think, I'd have accused them of adopting low tactics in creating such an OTT strawman to argue against.
Alan Dean wrote: > At the moment, I am casting my vote with using RDF/XML as the > 'fundamental type' for my shopping use case: > > http://simplewebservices.org/index.php?title=Shopping > > this does not preclude obtaining alternate representation > formats, such as json, by content-negotiation (which I will > be elaborating on soon in the use case) but I am choosing > RDF/XML as the fundamental type. And I am casting my vote in opposition to RDF. I don't have it on the wiki yet, but I just detailed my thoughts in an email to the simple web services list here [1]. (Please take any discussion to that list.) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On 6/9/07, Jon Hanna <jon@...> wrote: > > Alan Dean wrote: > > On 6/9/07, Mike Schinkel <mikeschinkel@...> wrote: > >> ... lots ... > >> Jon Hanna wrote: > >> ... lots as well ... > > > > Question: Surely URL construction is orthogonal to REST? > > Depends on just what you mean. > > If by "URI construction" you mean "URIs are built according to some sort > of sensible design" (I don't think either of us do) then it's largely > orthogonal to REST but personally I think still a good thing. As has been discussed plenty of times on this list: may be regarded as good practice, but is (at the end of the day) an aesthetic choice. REST as an architectural style does not require URIs to have a formalised structure, at least not as I understand it. > If you're talking about types of hypertext that don't just give you a > straight URI link but a way of constructing it (like templates or forms) > then it fits so much within REST as to be largely an orthogonal matter - > REST needs hypertext but not necessarily any particular form. From what I understood of what Mike was saying - I think that is what he meant. > If you're talking about clients "just knowing" how to construct a > URI, which is what I would normally take "URI construction" to mean, > then you're no longer orthogonal to REST, you're counter to it. Totally agree. Alan
Alan Dean wrote: > > > Question: Surely URL construction is orthogonal to REST? > > > > Depends on just what you mean. > > > > If by "URI construction" you mean "URIs are built according to some > > sort of sensible design" (I don't think either of us do) then it's > > largely orthogonal to REST but personally I think still a > good thing. > > As has been discussed plenty of times on this list: may be > regarded as good practice, but is (at the end of the day) an > aesthetic choice. > REST as an architectural style does not require URIs to have > a formalised structure, at least not as I understand it. Agreed. As an analogy, a car doesn't have to be aesthetically pleasing to perform its job of providing transportation, but there are lots of intangible benefits for it to be aesthetically pleasing. > > If you're talking about types of hypertext that don't just > give you a > > straight URI link but a way of constructing it (like templates or > > forms) then it fits so much within REST as to be largely an > orthogonal > > matter - REST needs hypertext but not necessarily any > particular form. > > From what I understood of what Mike was saying - I think that > is what he meant. Exactly. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
I think you are both right here, it is just a matter of finding the limits of each statement. It is true that documents contain data, and that there are a lot of documents out there with good data in them, more and more of them marked up with xml. On the other hand if you just want to publish the structured data that is already in your database, and there is a lot of such data, then trying to fit that data into some document format is more work than is necessary, and is wasted effort, since it ends up being completely arbitrary how you end up structuring the data. Of course this has nothing to do with xml, since rdf has an xml serialisation. Well... XML is a Markup Language. It is meant to mark up text. As soon as you publish data you are going somewhat against the purpose of XML. Henry --- In rest-discuss@yahoogroups.com, Elliotte Harold <elharo@...> wrote: > > Patrick Mueller wrote: > > > I'm thinking if you've decided on mapping your data to XML in the first > > place, you've made a wrong turn. XML is a poor serialization format, > > because it has little direct mapping to 'data'. It's great for > > documents, but as programmers, we deal with data, we design data (or > > try). Expecting programmers to design well thought out documents is too > > much to ask, IMHO. > > Sounds like you live in a very tiny box, in which you have never seen any > non-record-oriented data. In my world documents contain data and data > includes documents. The distinction is fuzzy to nonexistent. My programs > and my fellow programmers and I deal with this all the time. > > > -- > Elliotte Rusty Harold elharo@... > Java I/O 2nd Edition Just Published! > http://www.cafeaulait.org/books/javaio2/ > http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/ >
Mike Schinkel wrote: >> From what I understood of what Mike was saying - I think that >> is what he meant. Then we've just been using terms differently. I don't call that URI construction, I call it hypermedia.
Jon Hanna wrote: > Mike Schinkel wrote: > >> From what I understood of what Mike was saying - I think > that is what > >> he meant. > > Then we've just been using terms differently. I don't call > that URI construction, I call it hypermedia. Ah, different meanings for the same terms. The crux of more debates, disagreements, and wars than anything besides differing values or battles for scarce resources. I make the point so strongly about URL construction because it means different things to different people, and w/o a shared understanding productive communication is stifled. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On Jun 9, 2007, at 4:13 PM, Mike Schinkel wrote: > Jon Hanna wrote: > > Mike Schinkel wrote: > > >> From what I understood of what Mike was saying - I think > > that is what > > >> he meant. > > > > Then we've just been using terms differently. I don't call > > that URI construction, I call it hypermedia. > > Ah, different meanings for the same terms. The crux of more debates, > disagreements, and wars than anything besides differing values or > battles > for scarce resources. > > I make the point so strongly about URL construction because it means > different things to different people, and w/o a shared understanding > productive communication is stifled. > I must admit that the term 'uri construction' is misleading. Maybe you want to come up with an alternative. Said differently, how would you call generating uris without following a template? - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
On 6/10/07, Steve Bjorg <steveb@...> wrote: > > > > > > > On Jun 9, 2007, at 4:13 PM, Mike Schinkel wrote: > > > Jon Hanna wrote: > > > Mike Schinkel wrote: > > > >> From what I understood of what Mike was saying - I think > > > that is what > > > >> he meant. > > > > > > Then we've just been using terms differently. I don't call > > > that URI construction, I call it hypermedia. > > > > Ah, different meanings for the same terms. The crux of more debates, > > disagreements, and wars than anything besides differing values or > > battles > > for scarce resources. > > > > I make the point so strongly about URL construction because it means > > different things to different people, and w/o a shared understanding > > productive communication is stifled. > > > I must admit that the term 'uri construction' is misleading. Maybe > you want to come up with an alternative. Said differently, how would > you call generating uris without following a template? Without trying to sound facetious ... "URI generation"? ... and when done by the client "User Agent URI generation"? Alan
Alan Dean wrote: > Without trying to sound facetious ... "URI generation"? > > ... and when done by the client "User Agent URI generation"? And with overtones of "The Who" singing "Mu, mu, mu, mu, my generation..." in the background... '-) -Mike
Steve Bjorg wrote: > I must admit that the term 'uri construction' is misleading. > Maybe you want to come up with an alternative. Said differently, how > would you call generating uris without following a template? You mean with a template? Hmm. Maybe you are right. My point was language is important, yet I wasn't using it. Maybe: URL assembly? URL composition? URL formation? URL manufacture? I like the first two... -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On Jun 9, 2007, at 7:03 PM, Mike Schinkel wrote:
> Steve Bjorg wrote:
> > I must admit that the term 'uri construction' is misleading.
> > Maybe you want to come up with an alternative. Said differently, how
> > would you call generating uris without following a template?
>
> You mean with a template? Hmm. Maybe you are right. My point was
> language
> is important, yet I wasn't using it. Maybe:
>
> URL assembly?
> URL composition?
> URL formation?
> URL manufacture?
>
> I like the first two...
>
> --
> -Mike Schinkel
> http://www.mikeschinkel.com/blogs/
> http://www.welldesignedurls.org
> http://atlanta-web.org - http://t.oolicio.us
>
Let's do it by examples:
1) going from 'http://server.com/addressbook/{name}' to
'http://server.com/addressbook/johndoe'
2) going from 'http://server.com/addressbook' to
'http://server.com/addressbook/johndoe'
#1 follows a template to create a new uri
#2 assumes implicit knowledge to create new uri
I believe everybody (including myself) likes #1, b/c the template can
change at any time and the client can create the correct new uris by
following a new template. The template is assumed to be obtained by
message exchange, of course, and not something hard-coded on the
client (otherwise, it would be the same as #2). #2, on the other
hand, is loathed universally.
For #1, I like 'uri composition' since it's the composition of a
template and data. Said differently, but with the same intent,
composition of partial knowledge to achieve full knowledge.
For #2, I consider 'uri creation' more appropriate. It reminds me of
creationism: from nothing to something through divine intervention.
These are just some thoughts to guide discussion around the terms for
how uris originate. The distinction is obviously key! ;)
- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
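Steve's two cases can be sketched in a few lines of Python. This is a minimal sketch: `expand` is a hypothetical helper, and the URI Templates draft defines far richer expansion rules (and percent-encoding) than this naive substitution.

```python
import re

def expand(template, values):
    """Naively expand a URI template by replacing each {name}
    placeholder with the corresponding value (no percent-encoding)."""
    return re.sub(r"\{(\w+)\}", lambda m: values[m.group(1)], template)

# Case #1: the server advertises a template; the client composes the URI.
uri = expand("http://server.com/addressbook/{name}", {"name": "johndoe"})
print(uri)  # http://server.com/addressbook/johndoe

# Case #2 needs no code at all: the client "just knows" to append
# /johndoe -- and that hard-coded knowledge is exactly what breaks
# when the server changes its URI layout.
```

The point being that in case #1 the substitution rule arrives in a message, so a changed template is picked up on the next exchange.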
Steve Bjorg wrote:
> > > I must admit that the term 'uri construction' is misleading.
> > > Maybe you want to come up with an alternative. Said
> > > differently, how would you call generating uris without
> > > following a template?
> >
> > You mean with a template? Hmm. Maybe you are right. My point was
> > language is important, yet I wasn't using it. Maybe:
> >
> > URL assembly?
> > URL composition?
> > URL formation?
> > URL manufacture?
> >
> Let's do it by examples:
>
> 1) going from 'http://server.com/addressbook/{name}' to
> 'http://server.com/addressbook/johndoe'
> 2) going from 'http://server.com/addressbook' to
> 'http://server.com/addressbook/johndoe'
>
> #1 follows a template to create a new uri
> #2 assumes implicit knowledge to create new uri
>
> I believe everybody (including myself) likes #1, b/c the
> template can change at any time and the client can create the
> correct new uris by following a new template. The template
> is assumed to be obtained by message exchange, of course, and
> not something hard-coded on the client (otherwise, it would
> be the same as #2). #2, on the other hand, is loathed universally.
>
> For #1, I like 'uri composition' since it's the composition
> of a template and data. Said differently, but with the same
> intent, composition of partial knowledge to achieve full knowledge.
>
> For #2, I consider 'uri creation' more appropriate. It reminds me of
> creationism: from nothing to something through divine intervention.
Okay, awesome!
So here are two proposals for the adoption of terminology to clarify this
issue moving forward.
#1:
A.) URL Composition - The process of using hypermedia and templates
for URLs to construct URLs for resources.
B.) URL Creation - The process of constructing URLs based upon
observation, pattern recognition, guessing, or other non-hypermedia based
process.
C.) URL Construction - The unspecified process of assembling URLs
either by URL composition or URL creation.
#2, an alternative that maintains the meaning many people assumed for URL
Construction, could be:
A.) URL Composition - The process of using hypermedia and templates
for URLs to assemble URLs for resources.
B.) URL Construction - The process of assembling URLs based upon
observation, pattern recognition, guessing, or other non-hypermedia based
process.
C.) URL Assembly - The unspecified process of determining URLs
either by URL composition or URL construction.
Are either of these acceptable, and if so, which would be preferred? And if
not, why not and do you have an alternate proposal?
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
P.S. Note that above I use the name "URL" instead of "URI" but URI could be
used above in place of URL. I know that most people on the W3C and related
lists prefer the term URI even though I strongly prefer the term URL because
it implies that an identifier must be locatable and not just an identifier
as used in the context of XML namespaces. I know that TimBL prefers URI, which
is, I assume, why the W3C prefers it, but I have had discussions with Dan
Connolly, who admitted he preferred URL for the same reason as I do but
chose not to fight that battle. That is why you see me always use the term
URL instead of URI. As an aside, I don't see much value in URNs (some, but
not much) for essentially the same reason.
P.P.S. For the record, I agree that implementing "A" is the best practice
for anything to be processed by a user agent, but believe that optimizing for
"B" is a best practice when the "client" is actually a human.
On 10/06/07, Mike Schinkel <mikeschinkel@...> wrote: > > >> From what I understood of what Mike was saying - I think > > that is what > > >> he meant. > > > > Then we've just been using terms differently. I don't call > > that URI construction, I call it hypermedia. > > Ah, different meanings for the same terms. The crux of more debates, > disagreements, and wars than anything besides differing values or battles > for scarce resources. And a fairly central theme to this list? Since there are no definitive terms, no reference source (that all can understand), it will remain 'in the land of the high priestess' until sorted or REST is driven into a cult corner. So many people talk past each other, list readers must give up and go find something simpler. I don't think the subtleties and overloading of terms do any good to REST ideas. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
Dave Pawson wrote: > > > From what I understood of what Mike was saying - I > > > think that is what he meant. > > > > > > Then we've just been using terms differently. I don't > > > call that URI construction, I call it hypermedia. > > > > Ah, different meanings for the same terms. The crux of > > more debates, disagreements, and wars than anything > > besides differing values or battles for scarce resources. > > And a fairly central theme to this list? Only this list? '-) > Since there are no definitive terms, no reference source > (that all can understand), it will remain 'in the land of the > high priestess' until sorted or REST is driven into a cult corner. > > So many people talk past each other, list readers must give > up and go find something simpler. > > I don't think the subtleties and overloading of terms do > any good to REST ideas. Try as I might, I'm not sure what point you were trying to make. As an owner of one of your books, I respect your input, but I really don't know what you were trying to say. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On 10/06/07, Mike Schinkel <mikeschinkel@...> wrote: > > Since there are no definitive terms, no reference source > > (that all can understand), it will remain 'in the land of the > > high priestess' until sorted or REST is driven into a cult corner. > > > > So many people talk past each other, list readers must give > > up and go find something simpler. > > > > I don't think the subtleties and overloading of terms do > > any good to REST ideas. > > Try as I might, I'm not sure what point you were trying to make. As an owner > of one of your books, I respect your input, but I really don't know what you > were trying to say. That REST as an idea is so full of unclear terms that it is not commonly understood. The terminology in this thread is (IMHO) a good example of that. A result I can see is that REST suffers from the confusion and will do so until some effort is put into clarification and subsequent agreed documentation. Then when some term (from Roy's dissertation or not) comes up, a reference can be made to the documentation, rather than restart a permathread on the list again. Or agreement reached and the term added. The recent O'Reilly book is a good start. I don't think it's enough. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
Henry Story wrote: > On the other hand if you just want to publish the structured data that > is already in your database, and there is a lot of such data, then > trying to fit that data into some document format, is more work than > is necessary, and is wasted effort, since it end up being completely > arbitrary how you end up structuring the data. But it's not a lot more work. For XML a couple of extra tags is all you really need. Relational/SQL databases are a very powerful tool for data management, and we can do a lot with them. In fact, they are so powerful and so useful, that people who spend their lives with them tend to forget that there are many data management tasks we can't do with them and even more things we shouldn't do with them. I am reminded of the situation with numerical analysis in physics for the last 50 years. Computers and numerical algorithms have been so incredibly good at solving so many previously unsolvable problems that two generations of physicists have done little but write programs. Indeed at the extreme some physicists believe that the universe is nothing more than one big computer program, just as at the extreme some database practitioners believe that all problems can be reduced to tables. However neither belief is true. There are solvable problems in physics that cannot be solved by numerical analysis, and there are solvable problems in data management that cannot be solved by SQL. We must be careful not to confuse our tools with the problem space. We must be even more careful not to limit the problems we attempt to solve to the problems that are amenable to our tool of choice. And we must be especially careful not to do so when designing new tools for new problems. JSON is an instance of such myopia. JSON is designed to represent serialized JavaScript objects, nothing more. It does not work well when extended beyond that domain. Perhaps the XML world is at fault for this. 
On the one hand, the XML community developed hideous, baroque APIs like DOM that nobody could love. On the other hand, they never successfully explained to most developers that XML could do more than serialize objects and database tables. So when developers switched to JSON, they never noticed what they were losing in doing so. :-( -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 6/10/07, Mike Schinkel <mikeschinkel@...> wrote: [. . .] > > So here are two proposals for the adoption of terminology to clarify this > issue moving forward. > > #1: > > A.) URL Composition - The process of using hypermedia and templates > for URLs to construct URLs for resources. Can you expand on this? Are there two different ways to do composition - one with hypermedia and one with templates, or is it one way that requires the combination of two components? > B.) URL Creation - The process of constructing URLs based upon > observation, pattern recognition, guessing, or other non-hypermedia based > process. > C.) URL Construction - The unspecified process of assembling URLs > either by URL composition or URL creation. > What about URL selection - the process of selecting the URL to be used from a provided list? --Chuck
On Jun 10, 2007, at 7:59 AM, Chuck Hinson wrote:
> On 6/10/07, Mike Schinkel <mikeschinkel@...> wrote:
> [. . .]
>>
>> So here are two proposals for the adoption of terminology to
>> clarify this
>> issue moving forward.
>>
>> #1:
>>
>> A.) URL Composition - The process of using hypermedia and
>> templates
>> for URLs to construct URLs for resources.
>
> Can you expand on this? Are there two different ways to do
> composition - one with hypermedia and one with templates, or is it one
> way that requires the combination of two components?
There are "kind of" two ways. The first one is standard, the second
one is an extension of the first using uri templates (which are still
in draft).
1) In the first case, we use a simple form:
<form action="http://server.com/quote" method="get">
<input type="text" name="product" value="hairspray" />
<input type="text" name="quantity" value="10" />
</form>
This form describes how to compose uris which look like this:
http://server.com/quote?product=hairspray&quantity=10
2) In the second case, we use a uri template to describe a richer
set of uris that can be built:
<form action="http://server.com/quote/{product}?quantity={quantity}"
method="get">
<input type="text" name="product" value="hairspray" />
<input type="text" name="quantity" value="10" />
</form>
This form describes how to compose uris which look like this:
http://server.com/quote/hairspray?quantity=10
>
>> B.) URL Creation - The process of constructing URLs based
>> upon
>> observation, pattern recognition, guessing, or other non-
>> hypermedia based
>> process.
>> C.) URL Construction - The unspecified process of
>> assembling URLs
>> either by URL composition or URL creation.
>>
>
> What about URL selection - the process of selecting the URL to be used
> from a provided list?
Uri selection is the degenerate case of uri composition, where
the template already contains all the necessary information. For
simplicity, we can just refer to these as "uris" and not require
further explanation.
Just my 2c.
- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
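The two forms above compose the example URIs mechanically; here is a minimal Python sketch of both, using only the standard library and doing the template expansion by plain string replacement (the URI Templates draft defines the real expansion rules):

```python
from urllib.parse import urlencode

fields = {"product": "hairspray", "quantity": "10"}

# Case 1: an HTML GET form composes its action URI plus an
# application/x-www-form-urlencoded query string from its named fields.
uri = "http://server.com/quote" + "?" + urlencode(fields)
print(uri)  # http://server.com/quote?product=hairspray&quantity=10

# Case 2: a uri template places the same fields wherever the
# template dictates -- here, one in the path and one in the query.
template = "http://server.com/quote/{product}?quantity={quantity}"
uri2 = (template.replace("{product}", fields["product"])
                .replace("{quantity}", fields["quantity"]))
print(uri2)  # http://server.com/quote/hairspray?quantity=10
```

Either way the client never invents URI structure on its own; it only fills in blanks the server handed it.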
Hi, in HTTP, is there a way to be sure that a representation received upon a GET is definitely NOT coming from any (possibly malfunctioning) cache, but really from the origin server? The background for the question is the Reliable-POST issue and it has been raised that, when the server supplies unique IDs for the client to include in its POST requests, malfunctioning caches would make it possible for two clients to receive the same ID. A way to be absolutely sure that the GET response comes from the origin server would solve that problem. Thoughts? Jan
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > Hi, > > in HTTP, is there a way to be sure that a representation received > upon a GET is definitely NOT coming from any (possibly > malfunctioning) cache, but really from the origin server? > The background for the question is the Reliable-POST issue and it has > been raised that, when the server supplies unique IDs for the client > to include in its POST requests, malfunctioning caches would make it > possible for two clients to receive the same ID. There are a number of HTTP response headers which can tell you a lot about the caching of the resource, but if you look at the problem from another angle you can work with caching and not against it. I would generally prefer that the server did not supply unique IDs to be included in the next POST, as this smells of maintaining state on the server. Instead, unique IDs can be generated by the client using GUIDs. Paul Prescod has some interesting things to say on the subject of reliable HTTP: http://www.prescod.net/reliable_http.html > A way to be absolutely sure that the GET response comes from the > origin server would solve that problem. > > Thoughts? > > > Jan > Eoin http://www.eoinprout.com/
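Eoin's suggestion is straightforward to sketch: the client mints its own globally unique ID, so no GET (and hence no cache, malfunctioning or otherwise) is involved in obtaining it. For Jan's original question, HTTP/1.1 also lets a request carry a `Cache-Control: no-cache` directive to force end-to-end revalidation with the origin server, though a truly broken cache could ignore that too. A minimal Python sketch of the client-generated-ID approach follows; the `X-Request-Id` header name and the `/orders` URI are purely illustrative, not standard.

```python
import uuid

# The client generates the unique ID itself -- no server round trip,
# no cache in the loop.
request_id = str(uuid.uuid4())

# Sketch of the resulting POST: the ID travels with the request, so if
# the response is lost and the client retries, the server can detect
# the duplicate and avoid performing the action twice.
request_lines = [
    "POST /orders HTTP/1.1",          # hypothetical resource
    "Host: example.org",              # hypothetical origin server
    f"X-Request-Id: {request_id}",    # illustrative header name
    "Content-Type: application/x-www-form-urlencoded",
    "",
    "product=hairspray&quantity=10",
]
print("\r\n".join(request_lines))
```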
Steve Loughran wrote: > > > Report of this meeting is up online: > http://www.w3.org/2007/04/wsec_report > <http://www.w3.org/2007/04/wsec_report> > > Given the presentations we saw, its a bit of a disappointing writeup; > there are definately some statements about the value of SOAP1.2, > WSDL2.0 and, in particular, WS-A that surprised me, and which went > alongside some misunderstandings of REST > > [...] > so, lots of interesting papers, but this summary is pretty > disappointing. I wonder if this was the actual outcome of the > workshop, or merely the opinons of those who volunteered to write it > up. As it is, it is more a "there are some problems with WS-*, but we > can fix them" rather than some discussion on how best to use REST as > an architecture for behind-the-firewall systems. I suspect it's an awkward time to be on the WS-* side of the house. "REST faithful" is the giveaway characterization, and ironic given the document is published by the World Wide Web Consortium, which has an explicit mission statement for the Web's potential, and that the consortium failed to produce an architecture document of note for Web Services. That said, it's always interesting to watch how the WS-* v REST debate gets played out. I bet those trout jokes aren't so funny anymore. cheers Bill
On 6/10/07, Elliotte Harold <elharo@...> wrote: > Henry Story wrote: > > > JSON is an instance of such myopia. JSON is designed to represent > serialized JavaScript objects, nothing more. It does not work well when > extended beyond that domain. But the span of where it does work well is so large! Mapped collections and lists of values covers a huge solution space, and JSON does it so easily. There is nothing JSON does that XML can't (what is the data equivalent of Turing Complete?), but JSON is much less costly (verbosity, programming weight) than XML for those things. JSON is so good at it that it's become the disruptive technology that is chewing away at the coat tails of XML. John Heintz -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
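John's "mapped collections and lists of values" point is easy to see concretely; a minimal sketch using Python's standard json module:

```python
import json

# A map containing a list of maps of scalars -- the structured-data
# shape that nearly every language can represent natively.
order = {
    "customer": "johndoe",
    "items": [
        {"product": "hairspray", "quantity": 10},
        {"product": "comb", "quantity": 2},
    ],
}

text = json.dumps(order)
round_tripped = json.loads(text)
assert round_tripped == order  # lossless for maps, lists, and scalars
print(text)
```

What it does not give you -- mixed content, attributes, comments, namespaces -- is exactly the document-oriented territory Elliotte is defending for XML.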
On Jun 10, 2007, at 5:24 PM, John D. Heintz wrote: > On 6/10/07, Elliotte Harold <elharo@...> wrote: > > Henry Story wrote: > > > > > > JSON is an instance of such myopia. JSON is designed to represent > > serialized JavaScript objects, nothing more. It does not work > well when > > extended beyond that domain. > > (snip) > > There is nothing JSON does that XML can't (what is the data equivalent > of Turing Complete?), but JSON is much less costly (verbosity, > programming weight) than XML for those things. > > JSON is so good at it that it's become the disruptive technology that > is chewing away at the coat tails of XML. Using JSON for anything else but server-to-browser communication is a mistake. Using anything else than JSON for server-to-browser communication is a mistake as well. In short, use the tool that fits the job and don't be indoctrinated by it. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Steve, Point taken. I'm not suggesting JSON should be used everywhere and don't mean to imply that. On 6/10/07, Steve Bjorg <steveb@...> wrote: > On Jun 10, 2007, at 5:24 PM, John D. Heintz wrote: > > On 6/10/07, Elliotte Harold <elharo@...> wrote: > > > Henry Story wrote: > > > > > > > > > JSON is an instance of such myopia. JSON is designed to represent > > > serialized JavaScript objects, nothing more. It does not work > > well when > > > extended beyond that domain. > > > > (snip) > > > > There is nothing JSON does that XML can't (what is the data equivalent > > of Turing Complete?), but JSON is much less costly (verbosity, > > programming weight) than XML for those things. > > > > JSON is so good at it that it's become the disruptive technology that > > is chewing away at the coat tails of XML. > Using JSON for anything else but server-to-browser communication is a > mistake. Using anything else than JSON for server-to-browser > communication is a mistake as well. In short, use the tool that fits > the job and don't be indoctrinated by it. > > - Steve > > -------------- > Steve G. Bjorg > http://www.mindtouch.com > http://www.opengarden.org > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
On Jun 10, 2007, at 9:24 PM, Josh Sled wrote: > Elliotte Harold <elharo@...> writes: >> JSON is an instance of such myopia. JSON is designed to represent >> serialized JavaScript objects, nothing more. It does not work well >> when >> extended beyond that domain. > > Not quite. JSON is the subset of JavaScript that is the simple > notation for > representing structured data. That contains strings, numbers, > booleans, and > lists and maps thereof. If you look around, you'll notice that > pretty much > every programming language has these constructs, and that is not by > coincidence. > > The value of JSON has not much to do with JavaScript, and > everything to do > with generality of structured (and basically Typed) data. So do all functional programming languages. So why JSON instead of ML? JSON has everything to do with ECMAscript. - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
On 11.06.2007, at 00:53, eoinprout wrote: > > I would generally prefer that the server did not supply unique IDs to > be included in the next post as this smells of maintaining state on > the server. You need to maintain server state in any case, because you need to remember the processed POSTs, but that is an inevitable cost of reliable POST (AFAICT). > Instead Unique IDs can be generated by the client using GUIDs. The more I think about the problems with the other approach, the more I tend towards client-provided GUIDs. Assuming a server will always respond the same way for any re-POST, the effect of multiple clients using the same server-provided GUID is a disaster; every client would just think its own request succeeded. Thanks, Jan > > Paul Prescod has some interesting things to say on the subject of > reliable http http://www.prescod.net/reliable_http.html > >> A way to be absolutely sure that the GET response comes from the >> origin server would solve that problem. >> >> Thoughts? >> >> >> Jan >> > > Eoin http://www.eoinprout.com/ > > > > > > Yahoo! Groups Links > > >
To start, a note that a cache sending a stale entity is not necessarily malfunctioning, as there are documented special cases (e.g. network connection known to be down) where such a response is reasonable. However, in these cases there should always be Warning headers. eoinprout wrote: > I would generally prefer that the server did not supply unique IDs to > be included in the next post as this smells of maintaining state on > the server. The only way to not have state on a server is to turn it off. The question is, is this putting state on a connection or not. I think not. If we consider this to be the current state of the resource from which we access the post handler (i.e. the resource whose representation contains a <form> in the case of an HTML rep) and that this state changes upon a successful POST to the post-handler, then this is perfectly RESTful. If the same resource presents us the form (on GET) and handles it (on POST) then we've the advantage of the invalidation rules coming into play, though I'd prefer not to depend upon them too much (since they are little talked about I'd be concerned that there are more things out there not handling them correctly than with other caching mechanisms). > Instead Unique IDs can be generated by the client using GUIDs. I prefer to avoid having the client tell me anything other than what it is the client's job to tell me (i.e. what "real" data should be in the form). > Paul Prescod has some interesting things to say on the subject of > reliable http http://www.prescod.net/reliable_http.html > >> A way to be absolutely sure that the GET response comes from the >> origin server would solve that problem. How "absolute" do you need? If we need to be *absolutely* sure we need to essentially consider this a man-in-the-middle attack by the cache and take necessary precautions. HTTPS will manage this unless it's a trusted client-side cache that is mistrusting.
I think it's more reasonable to assume the cache is at least vaguely correct in its handling. We have mechanisms for both client and server to insist upon an entity not being cached.
On 6/11/07, Jan Algermissen <algermissen1971@...> wrote: > The more I think about the problems with the other approach, the more > I tend towards client-provided GUIDs. > > Assuming a server will always respond the same way for any re-POST, > the effect of multiple clients using the same server-provided GUID is > a disaster; every client would just think its own request succeeded. > And how do you prevent two different clients from picking the same GUID? Given a single server (or even a finite-sized cluster of servers), I can guarantee that GUIDs are unique because the server is the only source of GUIDs and those IDs only have to be unique relative to the server, not globally. If I let clients generate GUIDs, I have no control over how clients pick GUIDs and clients don't generally have a way of coordinating their selection of GUIDs from the space of available GUIDs. Yes, it is possible to construct algorithms for generating unique GUIDs, but as the HTTPLR spec notes, they're hard to get right (and it only takes a faulty implementation on one client to mess everyone else up). --Chuck
On 11.06.2007, at 15:19, Chuck Hinson wrote: > Yes, it is possible to construct algorithms for > generating unique GUIDs, but as the HTTPLR spec notes, they're hard to > get right (and it only takes a faulty implementation on one client to > mess everyone else up). Ok, if that is a given fact (which it seems to be), the only way is to let the server generate the ID (or POE resource or whatever). What about reducing the probability of the influence of a broken cache by having the client provide a GUID in addition? Then the probability of an 'ID clash' would be the combination of the cache-failure probability and the broken client-side GUID algorithm probability. Isn't that sufficiently close to zero? Jan > > --Chuck
eoinprout wrote: > If you're getting IDs from a server for use in an upcoming POST isn't the > server maintaining "application" state for the session? > Isn't that unRESTful? What session? It's giving you an ID known not to have been used before. This reflects upon the state of the server. There's no session.
eoinprout wrote: > --- In rest-discuss@yahoogroups.com, Jon Hanna <jon@...> wrote: >> eoinprout wrote: >>> If you're getting IDs from a server for use in an upcoming POST isn't the >>> server maintaining "application" state for the session? >>> Isn't that unRESTful? >> What session? >> >> It's giving you an ID known not to have been used before. This reflects >> upon the state of the server. There's no session. >> > It depends on what the IDs are; if they are simply guaranteed unique > identifiers and nothing more then there is no problem. Oh goodness, yes.
I tend to agree that UUIDs [1] in many cases would be a brilliant choice for identifiers in a distributed environment. I also share Chuck's worries on losing control over the identifiers. For instance, if client1 has a crappy UUID generator (I consider anything except v4 with a proper pseudo-random generator to be crappy), then client2 can predict what will reasonably be the next identifier for client1 and form some kind of attack - depending on the weakness of the security model of your REST system and your clients. [1] or GUIDs.. what is really the difference? Isn't universally unique better than globally? :) Think about your fellow Martians! Of course it should be OK to set some kind of restriction on the kind of UUIDs one accepts (easy to fake, but more difficult than just using a proper uuid library). On 11 Jun 2007, at 14:31, eoinprout wrote: > But if you feel that strongly about it then a client can use > http://www.famkruithof.net/uuid/uuidgen > to get a GUID. curl -v http://www.famkruithof.net/uuid/uuidgen (..) < HTTP/1.1 200 OK < Date: Mon, 11 Jun 2007 20:45:53 GMT < Server: Apache/2.2.3 (Debian) mod_ssl/2.2.3 OpenSSL/0.9.8c < X-Powered-By: PHP/5.2.0-8+etch4 < Transfer-Encoding: chunked < Content-Type: text/html; charset=ISO-8859-1 Perhaps not the most RESTful service for such :-) -- Stian Soiland, myGrid team School of Computer Science The University of Manchester http://www.cs.man.ac.uk/~ssoiland/
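For the client-generated v4 route Stian describes, a minimal sketch using Python's standard-library `uuid` module (which draws v4 IDs from the OS random source, so no service like the one above is needed); the `make_request_id` helper name is my own:

```python
import uuid

def make_request_id() -> str:
    """Generate a version-4 UUID for use as a client-supplied
    request identifier (e.g. to deduplicate a re-POST)."""
    return str(uuid.uuid4())

rid = make_request_id()

# A v4 UUID carries its version in the ID itself, so a server
# can cheaply reject anything that isn't a proper v4 value.
assert uuid.UUID(rid).version == 4
```

A server enforcing "v4 only" this way can't stop a deliberately crafted ID, as Stian notes, but it does filter out clients with homegrown, predictable generators.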
Dave Pawson wrote: > That REST as an idea is so full of unclear terms that is not > commonly understood. > The terminology in this thread is (IMHO) a good example of that. > > A result I can see is that REST suffers from the confusion > and will do so until some effort is put into clarification > and subsequent agreed documentation. > > Then when some term (from Roys dissertation or not) comes up, > a reference can be made to the documentation, rather than > restart a permathread on the list again. > Or agreement reached and the term added. > > The recent O'Reilly book is a good start. I don't think it's enough. Ah, I totally agree. Have you seen the two new parallel initiatives toward that end? -- restpatterns.org -- simplewebservices.org -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Steve Bjorg wrote:
> There are "kind of" two ways. The first one is standard, the
> second one is an extension of the first using uri templates
> (which are still in draft).
>
> 1) In the first case, we use a simple form:
> <form action="http://server.com/quote" method="get"> <input
> type="text" name="product" value="hairspray" /> <input
> type="text" name="quantity" value="10" /> </form>
>
> This form describes how to compose uris which look like this:
> http://server.com/quote?product=hairspray&quantity=10
>
> 2) In the second case, we use a uri templates to describe a
> richer set of uris that can be built:
> <form action="http://server.com/quote/{product}?quantity={quantity}"
> method="get">
> <input type="text" name="product" value="hairspray" /> <input
> type="text" name="quantity" value="10" /> </form>
>
> This form describes how to compose uris which look like this:
> http://server.com/quote/hairspray?quantity=10
Exactly! FYI, I've been proposing that the HTML WG support URI Templates in HTML
Forms:
http://blog.welldesignedurls.org/2007/01/11/proposing-uri-templates-for-webf
orms-2/
> >> B.) URL Creation - The process of constructing URLs based
> >> upon observation, pattern recognition, guessing, or other non-
> >> hypermedia based process.
> >> C.) URL Construction - The unspecified process of
> assembling
> >> URLs either by URL composition or URL creation.
> >>
> >
> > What about URL selection - the process of selecting the URL
> to be used
> > from a provided list?
>
> Uri selection falls into the degenerate case of uri
> composition, where the template already contains all the
> necessary information. For simplicity, we can just refer to
> these as "uris" and not require further explanation.
I'm okay either way, but it may provide clarity to have a specific term.
FWIW.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
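The simple `{name}` template expansion Steve's second form describes can be sketched in a few lines. This is an illustration only: the URI Templates draft defines a much richer syntax, and the `expand` helper here is hypothetical, handling just the bare placeholder case from the example:

```python
import re
from urllib.parse import quote

def expand(template: str, values: dict) -> str:
    """Naive expansion of simple {name} placeholders, percent-
    encoding each substituted value. Only the trivial subset of
    the URI Templates draft syntax is handled."""
    def substitute(match):
        name = match.group(1)
        return quote(str(values[name]), safe="")
    return re.sub(r"\{(\w+)\}", substitute, template)

url = expand("http://server.com/quote/{product}?quantity={quantity}",
             {"product": "hairspray", "quantity": 10})
# url == "http://server.com/quote/hairspray?quantity=10"
```

The point of putting the template in the form's `action` is that the client composes the URI mechanically from server-supplied instructions, rather than guessing the URI structure.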
Chuck Hinson wrote: > > So here are two proposals for the adoption of terminology > to clarify > > this issue moving forward. > > > > #1: > > > > A.) URL Composition - The process of using hypermedia and > > templates for URLs to construct URLs for resources. > > Can you expand on this? Are there two different ways to do > composition - one with hypermedia and one with templates, or > is it one way that requires the combination of two components? What I was proposing, combining several others' input, was that we nominate the term "URL Composition" to specifically mean "Using hypermedia with templates (typically URI Templates) to discover and assemble URLs" and not anything else (accepting that my phraseology might need wordsmithing or other clarification.) > > B.) URL Creation - The process of constructing URLs > > based upon observation, pattern recognition, guessing, > > or other non-hypermedia based process. > > C.) URL Construction - The unspecified process of > > assembling URLs either by URL composition or URL > > creation. > > What about URL selection - the process of selecting the URL > to be used from a provided list? Isn't that simply hypermedia as it's been known? Having a term for that for orthogonality might make sense; is that what you meant? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On 6/10/07, Dave Pawson <dave.pawson@...> wrote: > > That REST as an idea is so full of unclear terms that is not commonly > understood. > The terminology in this thread is (IMHO) a good example of that. > > A result I can see is that REST suffers from the confusion and will do so > until some effort is put into clarification and subsequent agreed documentation. > > Then when some term (from Roys dissertation or not) comes up, a reference > can be made to the documentation, rather than restart a permathread on > the list again. > Or agreement reached and the term added. Dave, On the simplewebservices.org wiki, I have decomposed parts of Roy's dissertation in order to start building a REST vocabulary, see http://simplewebservices.org/index.php?title=Special:Allpages Regards, Alan Dean http://thoughtpad.net/alan-dean
I'm fairly new to the idea of using REST principles for designing the internals of a web application. It feels right to me, but I'm now thinking more about nice qualities like caching and namespace design. 1. Let's say I have an obvious resource; user photos. And expose it like this: ../users/ID/photos When a request comes in, the identity of the requester is used to authorize access to only certain photos. It would be nice if the resulting representation could be cached and remain (relatively) secure. One approach would be to include a difficult-to-guess token in the URI with a short cache life. But where to put the token? A query string param seems natural, but some web accelerators & caches ignore URLs with query parameters (e.g. google web accelerator). I guess too often they have side effects (or expose private data). Any other ideas? Seems like this would be a common problem. 2. Simpler question following the same example. What would be a RESTful way to add ratings to photos. Maybe just a PUT on something like: ../users/ID/photos/ID/ratings I wanted to avoid a deep URI hierarchy, but this seems to fit. Am I heading in the right direction?
> JavaScript (and its syntax) has way more users and much more
> visibility.
Agreed. I would use whatever variant was the simplest and that worked
well for the given context. We use JSON for internal, server to server
communication because we find it to be easier to use and happens to
work well in the browser arena.
Brandon
On 6/10/07, Josh Sled <jsled@...> wrote:
> Steve Bjorg <steveb@...> writes:
>
> > On Jun 10, 2007, at 9:24 PM, Josh Sled wrote:
> >> The value of JSON has not much to do with JavaScript, and everything to do
> >> with generality of structured (and basically Typed) data.
> >
> > So do all functional programming languages. So why JSON instead of ML?
> > JSON has everything to do with ECMAscript.
>
> Good point. There is something to be said for being very well known; in an
> abstract competition between ECMAscript and ML ... well, it should be
> obvious: JavaScript (and its syntax) has way more users and much more
> visibility. But I'll note that I was more so responding to XML
> vs. structured-data languages (N3, JSON, ...) than different langs
> themselves.
>
> --
> ...jsled
> http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
>
>
1 caching Where do you want the retrieved representation cached - on an intermediary (caching proxy), on the client used by the user agent, or both? 2 adding ratings Just use POST. You can post to the photo directly, or post to a 'photo ratings' resource specific to that photo a) POST /users/ID/photos/ b) POST /users/ID/photos/ratings If multiple clients will be posting ratings, you likely will be aggregating each contribution/submission so that's sort of like creating (or contributing to) another resource. Does the example you provide "../users/ID/photos/ID/ratings" have two "ID" segments because each represents a different identifier? > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Brad Schick > Sent: Sunday, June 10, 2007 11:41 PM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] RESTful design questions > > I'm fairly new to the idea of using REST principles for > designing the internals of a web application. It feels right > to me, but I'm now thinking more about nice qualities like > caching and namespace design. > > 1. Let's say I have an obvious resource; user photos. And > expose it like this: > > ../users/ID/photos > > When a request comes in, the identity of the requester is > used to authorize access to only certain photos. It would be > nice if the resulting representation could be cached and > remain (relatively) secure. One approach would be to include > a difficult to guess token in the URI with a short cache > life. But where to put the token? > > A query string param seems natural, but some web accelerators > & caches ignore URLs with query parameters (e.g. google web > accelerator). I guess too often they have side effects (or > expose private data). > > Any other ideas? Seems like this would be a common problem. > > 2. Simpler question following the same example. What would be > a RESTful way to add ratings to photos.
Maybe just a PUT on > something like: > > ../users/ID/photos/ID/ratings > > I wanted to avoid a deep URI hierarchy, but this seems to > fit. Am I heading in the right direction?
Chugging through Leonard and Sam's new book, I came across the example of the S3 list of buckets resource. Obviously this resource is user-specific; it also has a single URL for all users. That is, the resource is "the list of buckets for the current user". In the case of S3 the current user is indirectly specified via the Authorization: header. The book also mentions that an Allow: header could vary depending on the Authorization: header, for example if the current user has the ability to read but not update a resource, you'd see only Allow: GET OPTIONS HEAD. I recall a discussion on this list however where this design (Allow: varies by requesting user) was said to violate the HTTP specification. I cannot now find this discussion however. So I'll just throw this open: (1) Is it fine for a resource's bits to change per user, if it's defined in a user-relative way? So, is "Current Location of requesting user" a valid resource retrievable via HTTP? I think the answer is a noncontroversial yes, but wanted to double check before asking the followup: (2) Is it allowable for the Allow: header to reflect the metadata about what operations the requesting user can perform on the resource? -John
On 11/06/07, Mike Schinkel <mikeschinkel@...> wrote: > Have you see the two new parallel initiatives toward that end? > > -- restpatterns.org > -- simplewebservices.org No I hadn't. Thanks for the links. for me the metric will be when this list starts to reference them on a regular basis. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
On 11/06/07, Brad Schick <schickb@...> wrote: > 1. Let's say I have an obvious resource; user photos. And expose it > like this: > > ../users/ID/photos > > When an request comes in, the identity of the requester is used to > authorize access to only certain photos. It would be nice if the > resulting representation could be cached and remain (relatively) > secure. One approach would be to include a difficult to guess token in > the URI with a short cache life. But where to put the token? In the O'Reilly REST book, there's the idea of addressability, that each photo is addressable by a unique URL. Wouldn't your idea defeat that? One day my photo is at www.example.com/dave/photos/365/token1, then the next it's at www.example.com/dave/photos/365/token2, or whereever you put the token? I think that would really mess with the user view of your service. If each photo has a unique identifying string, its URL, which doesn't change, then they are predictable and addressable? regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
John Panzer wrote: > (1) Is it fine for a resource's bits to change per user, if > it's defined in a user-relative way? So, is "Current > Location of requesting user" a valid resource retrievable via > HTTP? I think the answer is a noncontroversial yes, but > wanted to double check before asking the followup: My two cents is "no." If one needed a generic URL I think it should redirect to a specific one. Not sure what status code. But I would also be interested in hearing any counter arguments if there are any... -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Brad Schick wrote: > 1. Let's say I have an obvious resource; user photos. And expose it > like this: > > ../users/ID/photos > > When a request comes in, the identity of the requester is used to > authorize access to only certain photos. It would be nice if the > resulting representation could be cached and remain (relatively) > secure. One approach would be to include a difficult to guess token in > the URI with a short cache life. But where to put the token? And then it isn't cacheable any more. Note that Cache-Control: no-cache doesn't mean something doesn't get cached, but that the cache is not used without checking the origin. Alternatively a max-age of 0 and a must-revalidate will also mean the origin server gets checked. Hence you can insist upon your authentication headers being passed all the way through. If allowing copies on an intermediate server is not secure enough for your needs then Cache-Control: private means only private caches will be used. Finally (not relevant to you, but fills out the caching vs. privacy spectrum) Cache-Control: no-store is what you want if security needs mean all caches must be mistrusted. > A query string param seems natural, but some web accelerators & > caches ignore URLs with query parameters (e.g. google web > accelerator). I guess too often they have side effects (or expose > private data). Read-ahead caches have no header information about links until after they've done a GET so they have to avoid such URIs to avoid malfunctioning sites where GETs have side-effects. Web caches only avoid caching responses to such URIs as a default if there are no relevant headers (Expires, Cache-Control, etc) for the same reasons. If you have the headers there then the caching will happen. The reason for avoiding the query string param is that you are creating a bunch of different resources (one for each URI) that all do the same job. This needless increase in complexity has several negative side-effects. > 2.
Simpler question following the same example. What would be a > RESTful way to add ratings to photos. Maybe just a PUT on something like: > > ../users/ID/photos/ID/ratings > > I wanted to avoid a deep URI hierarchy, but this seems to fit. Am I > heading in the right direction? Why avoid a deep hierarchy? From the REST point of view /users/ID/photos/ID/ratings and photos?user=x&ratings=y and /daf/fsdw3ur082/dsf are all the same. From a wider point of view, the above seems fine to me, what's your concern?
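Jon's caching-vs-privacy spectrum can be summarized as a choice of Cache-Control values by sensitivity. A sketch only: the `cache_control_for` helper and its sensitivity labels are hypothetical, but the header values are the RFC 2616 directives he names:

```python
def cache_control_for(sensitivity: str) -> str:
    """Map a rough sensitivity level onto a Cache-Control value.
    'revalidate' still lets caches store a copy but forces them
    back to the origin (where authentication headers can be
    checked) before reusing it."""
    policies = {
        "public":     "max-age=3600",            # freely cacheable for an hour
        "revalidate": "no-cache",                # store, but always check the origin
        "private":    "private, max-age=3600",   # browser cache only, not shared caches
        "secret":     "no-store",                # don't write the response down at all
    }
    return policies[sensitivity]

assert cache_control_for("secret") == "no-store"
```

As Jon points out later in the thread, `no-store` only guards against well-behaved caches; a cache you must actively mistrust calls for end-to-end privacy (HTTPS) instead.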
On Jun 12, 2007, at 12:32 PM, Jon Hanna wrote: > Finally (not relevant to you, but fills out the caching vs. privacy > spectrum) Cache-Control: no-store is what you want if security needs > mean all caches must be mistrusted. How much trust can one put in this restriction in the real world, i.e. do all relevant caching implementations honor this appropriately? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
John Panzer wrote: > (1) Is it fine for a resource's bits to change per user, if it's defined > in a user-relative way? So, is "Current Location of requesting user" a > valid resource retrievable via HTTP? I think the answer is a > noncontroversial yes, but wanted to double check before asking the followup: A controversial "yes, but" from me. If you indicate the resource will change on the basis of the Authorization header with Vary: Authorization and add a bit of caution in how you allow it to be cached (i.e. you don't) because that may catch out some caches in practice, then fine. The "but": In practice it will probably be more performant and reliable if this is done by having a bunch of resources for each user (which only that user - and perhaps admins or other users who would have a reason to look at them in a different context - can access) and the "current location" resource redirected to this through a 303 (303 because that is the cleanest choice for when the target exists in its own right as a separate resource). The "controversial": I know that others have disagreed with me on this issue in the past. > (2) Is it allowable for the Allow: header to reflect the metadata about > what operations the requesting user can perform on the resource? This isn't entirely clear to me. However if you have a resource per user as I suggest above and all users can either do GET, POST and PUT on it or else do nothing, then you can statically put those in an Allow header and the user in question will get that response while others get a 401.
Stefan Tilkov wrote: > On Jun 12, 2007, at 12:32 PM, Jon Hanna wrote: > >> Finally (not relevant to you, but fills out the caching vs. privacy >> spectrum) Cache-Control: no-store is what you want if security needs >> mean all caches must be mistrusted. > > How much trust can one put in this restriction in the real world, > i.e. do all relevant caching implementations honor this appropriately? Absolutely none whatsoever. Cache-Control: no-store prevents accidental exposure (trusted cache releases data due to malfunction or compromise). If that's not good enough you need to bypass it entirely by using end-to-end privacy such as is available in HTTPS so the cache doesn't see anything in it.
Jon Hanna wrote: > John Panzer wrote: > >> (1) Is it fine for a resource's bits to change per user, if it's defined >> in a user-relative way? So, is "Current Location of requesting user" a >> valid resource retrievable via HTTP? I think the answer is a >> noncontroversial yes, but wanted to double check before asking the followup: >> > > A controversial "yes, but" from me. > > If you indicate the resource will change on the basis of the > Authorization header with Vary: Authorization and add a bit of caution > in how you allow it to be cached (i.e. you don't) because that may catch > out some caches in practice, then fine. > > The "but": > > In practice it will probably be more performant and reliable if this is > done by having a bunch of resources for each user (which only that user > - and perhaps admins or other users who would have a reason to look at > them in a different context - can access) and the "current location" > resource redirected to this through a 303 (303 because that is the > cleanest choice for when the target exists in its own right as a > separate resource). > > The "controversial": > > I know that others have disagreed with me on this issue in the past. > > >> (2) Is it allowable for the Allow: header to reflect the metadata about >> what operations the requesting user can perform on the resource? >> > > This isn't entirely clear to me. However if you have a resource per user > as I suggest above and all users can either do GET, POST and PUT on it > or else do nothing, then you can statically put those in an Allow header > and the user in question will get that response while others get a 401. > > > > What if others are allowed to read (GET) but not update (PUT), but the owner of the resource is allowed to do both?
John Panzer wrote:
> What if others are allowed to read (GET) but not update (PUT), but the
> owner of the resource is allowed to do both?
Hmm. On thinking about this, it *seems* to me that the semantics of
Allow are much more about what methods can be performed on a resource
than what methods can be performed on a resource by a given user.
So if we have, say, guest and resource-owner users and the former can
only GET while the latter can GET and PUT the Allow header should always
be Allow: GET, PUT and the following responses must be generated:
          Guest     Owner
GET       2xx/3xx   2xx/3xx
PUT       401       2xx
POST      405       405
FOO       405       405
Note that while Allow: is optional with most responses it's mandatory
with 405s.
If you need more information about who can do what than this it should
probably be expressed in an entity.
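The response matrix above can be sketched as a small dispatch function. This is an illustration of Jon's point, not any particular framework's API; names are hypothetical, and 200 stands in for the table's "2xx/3xx":

```python
def respond(method: str, is_owner: bool) -> int:
    """Status codes for a resource where guests may only GET and
    the owner may GET and PUT. The Allow header stays static per
    resource: unsupported methods get 405 (with Allow: GET, PUT),
    while an unauthorized PUT gets 401, since the problem there
    is authentication, not the method."""
    supported = {"GET", "PUT"}
    if method not in supported:
        return 405              # response must carry Allow: GET, PUT
    if method == "PUT" and not is_owner:
        return 401              # method is fine, credentials are not
    return 200

assert respond("PUT", False) == 401
assert respond("POST", True) == 405
```

The design choice this encodes: Allow describes the resource, while the 401/2xx split describes the user, which keeps per-user authorization detail out of the header.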
Reading the docs at http://activemq.apache.org/ I noticed they have a REST API. So I clicked on http://activemq.apache.org/rest.html to read more and, perhaps not surprisingly, found that it's pretty broken REST. Adding a message to the queue seems fine... they give as an example a queue at http://www.acme.com/queue/orders/input and you can add a new message in the queue by POSTing to that URL. Fine. Consuming a message from the queue, though, seems problematic. They allow either GET or DELETE on the *same* URL to pop a message from the queue. They are aware that this is wrong: "Note that strict REST requires that GET be a read only operation; so strictly speaking we should not use GET to allow folks to consume messages. Though we allow this as it simplifies HTTP/DHTML/Ajax integration somewhat." ... but they don't seem to understand *how* wrong it is: * it's not REST that says GET is a read-only operation; it's HTTP. So their HTTP implementation is broken. Sadly seems to be pretty common. * DELETE on a URL representing a queue means you want to delete the entire queue, not a single message! The reason I'm writing to this list is that I thought it was an interesting case and I couldn't immediately think of better solution. Has anybody thought of a good way to model a queue? You could of course do a GET on the queue, returning a list of available messages, then DELETE one of those - but that leads to concurrency problems. -- Paul Winkler http://www.slinkp.com
Paul Winkler <pw_lists@...> writes: > * it's not REST that says GET is a read-only operation; it's HTTP. So > their HTTP implementation is broken. Sadly seems to be pretty common. > > * DELETE on a URL representing a queue means you want to delete the > entire queue, not a single message! > > The reason I'm writing to this list is that I thought it was an > interesting case and I couldn't immediately think of better solution. > Has anybody thought of a good way to model a queue? > > You could of course do a GET on the queue, returning a list of > available messages, then DELETE one of those - but that leads to > concurrency problems. To retrieve a message from the queue you should do this: POST /queue/ => 201, Location /queue/snapshotIDxxxx GET /queue/snapshotIDxxxx => 200, state of the snapshot, with links to elements DELETE /queue/snapshotIDxxxx/1 => 200, deleted resource This is the transactional model. Of course... it means either: - the queue is a multi-consumer queue - everything in the queue disappears when you create the snapshot. The latter is preferred obviously. -- Nic Ferrier http://www.tapsellferrier.co.uk
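Nic's snapshot model can be sketched as an in-memory server-side object (HTTP plumbing omitted; the class and method names are my own, and this takes his preferred option where the queue drains into each snapshot):

```python
import itertools

class QueueResource:
    """Sketch of the transactional model: POST to the queue creates
    a snapshot resource; messages are consumed by DELETEing elements
    of the snapshot, so GET stays a read-only operation."""
    def __init__(self):
        self._queue = []
        self._snapshots = {}
        self._ids = itertools.count(1)

    def post_message(self, msg):        # producer: POST /queue/
        self._queue.append(msg)

    def post_snapshot(self):            # consumer: POST /queue/ -> 201, Location
        sid = "snapshot%d" % next(self._ids)
        self._snapshots[sid] = dict(enumerate(self._queue))
        self._queue.clear()             # queued messages move into the snapshot
        return sid

    def get_snapshot(self, sid):        # GET /queue/<sid> -> 200, element links
        return dict(self._snapshots[sid])

    def delete_element(self, sid, n):   # DELETE /queue/<sid>/<n> -> 200
        return self._snapshots[sid].pop(n)

q = QueueResource()
q.post_message("order-1")
q.post_message("order-2")
sid = q.post_snapshot()
```

Note that both producing a message and creating a snapshot go through POST on the queue URL here, exactly as in Nic's outline; a real API would need some way to distinguish the two requests (e.g. by media type).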
Paul Winkler > Reading the docs at http://activemq.apache.org/ I noticed > they have a REST API. So I clicked on > http://activemq.apache.org/rest.html to read more and, > perhaps not surprisingly, found that it's pretty broken REST. Rather ironic given Roy Fielding's high profile role related to Apache... -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
Hi, I am curious about people's opinions about the following: Suppose the existence of a media type for purchase orders, e.g. application/order. When I POST an HTTP message of that media type to an HTTP server it seems reasonable to interpret this request as an intent to order something. The request is self-describing and its meaning is independent of the receiver. In particular, the self-descriptiveness protects me as a sender from changes of the receiver, changes that could otherwise assign a different meaning to my request. Now, if I make use of APP, wrap my order inside an Atom entry document and POST it to an APP collection, I'd interpret that as a request for 'storage', regardless of the entry's content being an order. But what if I POST the order to an APP server as media (not using the entry envelope), thus basically asking for the same thing as above? Isn't the meaning of the request actually the placement of an order (see above) and not the request for storing my angle-brackets order? The '201 Created' I am likely to see in both cases isn't helping much to check the client's expectations. Maybe this seems far-fetched, but if the receiver side actually *is* capable of affecting the meaning of my request, then I could send my orders in application/xml right away, effectively using HTTP as transport as opposed to transfer (to achieve coordination of peers). Jan
On Tue, Jun 12, 2007 at 04:34:39PM -0400, Mike Schinkel wrote: > Paul Winkler > > Reading the docs at http://activemq.apache.org/ I noticed > > they have a REST API. So I clicked on > > http://activemq.apache.org/rest.html to read more and, > > perhaps not surprisingly, found that it's pretty broken REST. > > Rather ironic given Roy Fielding's high profile role related to Apache... The Apache project does a lot of things besides the excellent http server. Judging by their java xmlrpc client library, not all things under the Apache umbrella are good. (If you need an xmlrpc client for java, try Redstone instead.) -- Paul Winkler http://www.slinkp.com
On Tue, Jun 12, 2007 at 09:34:41PM +0100, Nic James Ferrier wrote: > Paul Winkler <pw_lists@...> writes: > > The reason I'm writing to this list is that I thought it was an > > interesting case and I couldn't immediately think of better solution. > > Has anybody thought of a good way to model a queue? (snip) > To retrieve a message from the queue you should do this: > > POST /queue/ > => 201, Location /queue/snapshotIDxxxx > > GET /queue/snapshotIDxxxx > => 200, state of the snapshot, with links to elements > > DELETE /queue/snapshotIDxxxx/1 > => 200, deleted resource > > > This is the transactional model. Of course... it means either: > > - the queue is a multi-consumer queue I think that's typically the case. > - everything in the queue disappears when you create the snapshot. > > The latter is preferred obviously. I don't know... if the client dies after creating the snapshot, you've effectively thrown away a bunch of messages. -- Paul Winkler http://www.slinkp.com
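One way to address Paul's "client dies after creating the snapshot" concern is to attach a lease to the snapshot: if the consumer has not deleted every element before the lease expires, the unprocessed messages return to the queue. This is a common pattern in message brokers generally, not something from the thread or from ActiveMQ; everything below is an illustrative assumption.

```python
# Snapshot queue with leases: expired snapshots requeue their leftovers.
import time

class LeasedSnapshotQueue:
    def __init__(self, lease_seconds=30.0, clock=time.monotonic):
        self.pending = []
        self.snapshots = {}      # sid -> (deadline, {eid: msg})
        self.lease = lease_seconds
        self.clock = clock       # injectable for testing
        self._sid = 0

    def _reclaim(self):
        now = self.clock()
        for sid in list(self.snapshots):
            deadline, items = self.snapshots[sid]
            if now >= deadline:                        # lease expired:
                self.pending.extend(items.values())    # requeue leftovers
                del self.snapshots[sid]

    def create_snapshot(self):
        self._reclaim()
        self._sid += 1
        items = dict(enumerate(self.pending, 1))
        self.pending = []
        self.snapshots[self._sid] = (self.clock() + self.lease, items)
        return self._sid

    def delete_element(self, sid, eid):
        self._reclaim()
        if sid not in self.snapshots:
            return 404           # lease expired or unknown snapshot
        self.snapshots[sid][1].pop(eid, None)
        return 200
```

The trade-off is at-least-once delivery: a slow consumer can see a message redelivered after its lease lapses.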
It's funny the way these things happen. I said a few things that I'm still surprised people didn't really agree with or pick up on... http://www.pacificspirit.com/blog/2007/03/06/w3c_web_of_services_for_enterprise_computing Cheers, Dave _____ From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Steve Loughran Sent: Saturday, June 09, 2007 2:07 PM To: Rest List Subject: [rest-discuss] Web of Services for Enterprise Computing Report of this meeting is up online: http://www.w3.org/2007/04/wsec_report Given the presentations we saw, it's a bit of a disappointing writeup; there are definitely some statements about the value of SOAP 1.2, WSDL 2.0 and, in particular, WS-A that surprised me, and which went alongside some misunderstandings of REST: "The original goal of SOAP has much in common with REST and in fact certain interpretations of the specifications cite the major difference being SOAP allows the definition of a method or operation name within the message and REST does not. This is largely due to the fact that REST is specific to HTTP (i.e. HTTP has an action header that can be used for the same purpose, or an associated URI) whereas Web services are multi-protocol and therefore need all information to be contained within the envelope. Some of the fundamental differences result from these different assumptions: REST assumes HTTP while WS-* assumes protocol neutrality. Others result from the fact that HTTP intentionally designs out features of existing IT systems (e.g. session based security, transaction coordination, reliability, etc.) and that the WS-* specifications basically amount to an attempt to put them back in." Funniest quote: "the major goal of WS-* is interoperability and if you don't need interoperability you don't really need WS-*."
And, after much praise of WS-Addressing and its rapid standardisation (==the reality that there are 3-4 different versions out there, and WS-DM 1.0 depends on two different versions), finally a note that "there was consensus surrounding EPRs was that vendors should show care in using EPRs" so, lots of interesting papers, but this summary is pretty disappointing. I wonder if this was the actual outcome of the workshop, or merely the opinions of those who volunteered to write it up. As it is, it is more a "there are some problems with WS-*, but we can fix them" rather than some discussion on how best to use REST as an architecture for behind-the-firewall systems. -steve
"Mike Schinkel" <mikeschinkel@...> writes: > Paul Winkler >> Reading the docs at http://activemq.apache.org/ I noticed >> they have a REST API. So I clicked on >> http://activemq.apache.org/rest.html to read more and, >> perhaps not surprisingly, found that it's pretty broken REST. > > Rather ironic given Roy Fielding's high profile role related to > Apache... Not really. Apache is a massive umbrella under which much that is rubbish shelters with much that is utterly fantastic. -- Nic Ferrier http://www.tapsellferrier.co.uk
Paul Winkler <pw_lists@...> writes: >> - the queue is a multi-consumer queue > > I think that's typically the case. Then you have to arbitrate who is subscribed and who has, or rather has not, created a snapshot and not remove the item from the queue until everyone has the snapshot. That is possible... I think the best way to do that would be to encode the subscribers into the snapshot resources somehow. Or just have a transactional queue that allows completely arbitrary views... so two clients can take snapshots and other clients can continue committing... Whatever. I'm pretty sure the answer is transactional snapshots. I looked at the apache project you mention last year and was going to do something with transactions... but until someone pays me to just do cool stuff I'm stuck with having to earn a living /8-> If there are any billionaires out there reading REST discuss then please consider me for the "someone to employ to sit around doing mad scientist stuff in the attic" position in your household. -- Nic Ferrier http://www.tapsellferrier.co.uk
How about somebody go and implement an example of a RESTful approach to a queue using only HTTP and document it. If you like Java, you can use this age old project as a starting point http://sourceforge.net/projects/destiny/ On 6/12/07, Nic James Ferrier <nferrier@...> wrote: > Paul Winkler <pw_lists@...> writes: > > >> - the queue is a multi-consumer queue > > > > I think that's typically the case. > > Then you have to arbitrate who is subscribed and who has, or rather > has not, created a snapshot and not remove the item from the queue > until everyone has the snapshot. > > That is possible... I think the best way to do that would be to encode > the subscribers into the snapshot resources somehow. > > Or just have a transactional queue that allows complete arbitary > views... so two clients can take snapshots and other client can > continue committing... > > Whatever. I'm pretty sure the answer is transactional snapshots. > > > > > I looked at the apache project you mention last year and was going to > do something with transactions... but until someone pays me to just do > cool stuff I'm stuck with having to earn a living /8-> > > If there are any billionaire's out there reading REST discuss then > please consider me for the "someone to employ to sit around doing mad > scientist stuff in the attic" position in your household. > > -- > Nic Ferrier > http://www.tapsellferrier.co.uk > > > > Yahoo! Groups Links > > > >
On 6/12/07, Dave Orchard <orchard@...> wrote: > > It's funny the way these things happen. I said a few things that I'm still surprised people didn't really agree with or pick up on.. > > http://www.pacificspirit.com/blog/2007/03/06/w3c_web_of_services_for_enterprise_computing (excerpt): "... I suggested WADL (Web application description language), to help with enterprises and the desperate perl/python hacker building stronger typed REST services. And for the flipside, I suggested improved SOAP to URI/XML bindings so SOAP/WSDL services would be more easily consumable by REST clients. There were 2 votes (including mine) for doing WADL, and 2 votes against doing WADL ..." Personally, I haven't bought into the WADL thing. My focus is on how to make the response self-descriptive of the contained hypermedia. To that end, I am employing the HTTP-in-RDF vocabulary in my shopping use case: http://simplewebservices.org/index.php?title=Shopping I am still 'working the dough' of the messages right now, but you can get an inkling of where I am going from what's there already. Regards, Alan Dean http://thoughtpad.net/alan-dean
On 6/12/07, Dave Orchard <orchard@...> wrote: > > It's funny the way these things happen. I said a few things that I'm still surprised people didn't really agree with or pick up on.. > > http://www.pacificspirit.com/blog/2007/03/06/w3c_web_of_services_for_enterprise_computing (excerpt): "... I'm still surprised that there wasn't more support for technical ways of bringing the two architectures together ..." I'm not surprised. I really can't imagine MS, Sun and IBM accepting that the spec stack now built on top of SOAP should be unwound. Given that, is the REST community willing to reformat itself in the image of WS-* ? ... I imagine not. I look on them as two different tools in the box. Each has strengths and weaknesses. I speak as a professional developer, involved in "web services" in the most general sense for the better part of 7 years: from the pre-WS-* days of POX-over-HTTP, through to WS-* and REST. I implement both WS-* and RESTful services in my professional life - selecting technology according to the appropriateness to the task at hand. I don't see the need for unification or "bringing them together", any more than I look at a real-world toolbox and think to myself "hmmm ... what I really want is a more hammer-like screwdriver." Regards, Alan Dean http://thoughtpad.net/alan-dean
There were a lot of users and ".com"s in the room. MSFT wasn't there. So it wasn't about "MS, Sun and IBM"s goals, nor reformatting REST. I was hoping to hear more on the bridging of architectures from the non software vendors, but didn't. A great deal of the reason why there are 2 very different tools in the toolbox is because nobody has provided joins between the tools. Even when I have different tools in the toolbox, sometimes they can have things in common to make it easier to use the essential parts of each tool. Like having a metric to imperial conversion or tools with both units. Cheers, dave > -----Original Message----- > From: Alan Dean [mailto:alan.dean@...] > Sent: Tuesday, June 12, 2007 2:54 PM > To: orchard@... > Cc: Steve Loughran; Rest List > Subject: Re: [rest-discuss] Web of Services for Enterprise Computing > > On 6/12/07, Dave Orchard <orchard@...> wrote: > > > > It's funny the way these things happen. I said a few > things that I'm still surprised people didn't really agree > with or pick up on.. > > > > > http://www.pacificspirit.com/blog/2007/03/06/w3c_web_of_services_for_e > > nterprise_computing > > (excerpt): > "... I'm still surprised that there wasn't more support for > technical ways of bringing the two architectures together ..." > > I'm not surprised. I really can't imagine MS, Sun and IBM > accepting that the spec stack now built on top of SOAP should > be unwound. > > Given that, is the REST community willing to reformat itself > in the image of WS-* ? ... I imagine not. > > I look on them as two different tools in the box. Each has > strengths and weaknesses. I speak as a professional > developer, involved in "web services" in the most general > sense for the better part of 7 years: > from the pre-WS-* days of POX-over-HTTP, through to WS-* and > REST. I implement both WS-* and RESTful services in my > professional life - selecting technology according to the > appropriateness to the task at hand.
> > I don't see the need for unification or "bringing them > together", any more than I look at a real-world toolbox and > think to myself "hmmm ... > what I really want is a more hammer-like screwdriver." > > Regards, > Alan Dean > http://thoughtpad.net/alan-dean >
On 6/12/07, Dave Orchard <orchard@...> wrote: > There were a lot of users and ".com"s in the room. MSFT wasn't there. So > it wasn't about "MS, Sun and IBM"s goals, nor reformatting REST. > > I was hoping to hear more on the bridging of architectures from the non > software vendors, but didn't. > > A great deal of the reason why there are 2 very different tools in the > toolbox is because nobody has provided joins between the tools. Even when I > have different tools in the toolbox, sometimes they can have things in > common to make it easier to use the essential parts of each tool. Like > having a metric to imperial conversion or tools with both units. Dave, Are there any resources / write-ups about the kind of tools that you have in mind? It sounds like my impression of what you mean is incorrect and that may be because there is no context on the referenced blog entry and the Meeting report does not elaborate on that aspect. The reason why I mentioned MS, Sun and IBM is that it is hard to imagine a significant delta to the WS-* spec stack without their buy-in, even if they weren't in the room - but if your comments do not imply any specification changes to WS-*, it isn't important. I have to admit that I wonder what the WS <-> REST interop requirement is. No obvious use cases come to mind - although that could simply be that it's nearly midnight here and I'm knackered ;-) Alan
Mike Dierken wrote: > > > How about somebody go and implement an example of a RESTful approach > to a queue using only HTTP and document it. http://www.dehora.net/doc/httplr/draft-httplr-01.html#rfc.section.9 I'm sure I would change some things on reflection. cheers Bill
Paul Winkler wrote: > Consuming a message from the queue, though, seems problematic. > They allow either GET or DELETE on the *same* URL to pop a message > from the queue. > > They are aware that this is wrong: > "Note that strict REST requires that GET be a read only operation; so > strictly speaking we should not use GET to allow folks to consume > messages. Though we allow this as it simplifies HTTP/DHTML/Ajax > integration somewhat." > > ... but they don't seem to understand *how* wrong it is: > > * it's not REST that says GET is a read-only operation; it's HTTP. So > their HTTP implementation is broken. Sadly seems to be pretty common. > > * DELETE on a URL representing a queue means you want to delete the > entire queue, not a single message! > > The reason I'm writing to this list is that I thought it was an > interesting case and I couldn't immediately think of better solution. > Has anybody thought of a good way to model a queue? I seem to recall saying something like that to James Strachan ages ago. In fairness to the ActiveMQ crew, they rolled that out a few years ago now, and people shouldn't underestimate how awkward mapping a queue onto HTTP can be (especially if you don't have PUT and DELETE to pop and push). The sanest approach to modeling queues I know of over HTTP is syndication technology; i.e. force the data structure towards a list. And if you need to do pubsub, give everyone their own URL to pop. cheers Bill
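Bill's "force the data structure towards a list, and give everyone their own URL" suggestion can be sketched as an append-only feed with a per-subscriber cursor resource. Names and paths below are invented for illustration; the key property is that consuming becomes a plain, repeatable GET.

```python
# Sketch: queue exposed as a syndication-style list with per-subscriber
# cursors. Publishing appends to the shared list; each subscriber's own
# resource tracks how far it has read.

class FeedQueue:
    def __init__(self):
        self.entries = []    # the whole history, append-only
        self.cursors = {}    # subscriber -> index of next unread entry

    def publish(self, entry):
        self.entries.append(entry)

    def subscribe(self, who):
        """PUT /queue/subscribers/{who}  (idempotent: safe to repeat)"""
        self.cursors.setdefault(who, len(self.entries))

    def poll(self, who):
        """GET /queue/subscribers/{who} -> unread entries, no side effect"""
        return self.entries[self.cursors[who]:]

    def ack(self, who, count):
        """POST an acknowledgement to advance this subscriber's cursor."""
        self.cursors[who] = min(self.cursors[who] + count, len(self.entries))
```

Because GET never mutates anything, caches and retries behave correctly, and pubsub falls out for free: every subscriber reads at its own pace.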
Hi Jan,
* Jan Algermissen <algermissen1971@...> [2007-06-12 22:45]:
> Suppose the existence of a media type for purchase orders, e.g.
> application/order. When I POST an HTTP message of that media
> type to an HTTP server it seems reasonable to interpret this
> request as an intent to order something.
only if you somehow know that the resource you are POSTing to
processes orders. RFC 2616 is quite clear on this matter, I
think:
The actual function performed by the POST method is
determined by the server and is usually dependent on the
Request-URI.
> The request is self-describing and its meaning is independent
> of the receiver. Especially does the self-descriptiveness
> protect me as a sender from changes of the receiver, changes
> that could otherwise assign a different meaning to my request.
“Self-describing” means that you don’t need any data outside of
the full request in order to understand the request; it doesn’t
mean you can actually know what the server is going to do with it
unless the server has made a particular promise to you.
> But what if I POST the order to an APP server as media (not
> using the entry envelope) thus basically asking for the same
> thing as above? Isn't the meaning of the request actually the
> placement of an order (see above) and not the request for
> storing my angle brackets order?
No. In AtomPP, the server makes a promise to you, by way of the
service document, that POST requests to particular URIs will be
interpreted as storage requests and processed in a particular
manner. And that’s what the server subsequently does.
> The '201 Created' I am likely to see in both cases isn't
> helping much, to check the client's expectations.
I quote from RFC 2616 again:
The action performed by the POST method might not result in a
resource that can be identified by a URI. In this case,
either 200 (OK) or 204 (No Content) is the appropriate
response status, depending on whether or not the response
includes an entity that describes the result.
> Maybe this seems far-fetched, but if the receiver side
> actually *is* capable of affecting the meaning of my request,
> then I could send my orders in application/xml right away,
> effectively using HTTP as transport as opposed to transfer (to
> achieve coordination of peers).
The receiver is not affecting the meaning of your request because
your request does not have such finely specified semantics in the
first place. POST is POST and means “process this somehow”;
adding a particular media type into the mix does not make it mean
anything more specific.
RFC 2616 is definite on the matter: what the POST means depends
on the URI of the resource you are requesting. You need to know
beforehand how that resource will process your request, and this
happens by way of a promise the server makes, which it describes
to you using a suitable hypermedia format.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* A. Pagaltzis <pagaltzis@...> [2007-06-13 01:15]: > RFC 2616 is definite on the matter: what the POST means depends > on the URI of the resource you are requesting. I have to correct myself here: in fact it doesn’t even promise that much – it says “usually”, but leaves it entirely up to the server on a case by case basis. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On Tue, Jun 12, 2007 at 11:37:14PM +0100, Bill de hOra wrote: > Mike Dierken wrote: > > > > > > How about somebody go and implement an example of a RESTful approach > > to a queue using only HTTP and document it. > > > http://www.dehora.net/doc/httplr/draft-httplr-01.html#rfc.section.9 > > I'm sure I would change some things on reflection. That looks pretty well thought out... but it also takes three requests to do a single pop. That strikes me as a lot of overhead. I wonder if this case is just better served by an RPC or POX approach. -- Paul Winkler http://www.slinkp.com
Sure, here are some things: http://www.pacificspirit.com/blog/2004/10/13/wsget http://www.pacificspirit.com/blog/2005/03/01/wsrest_continued_do_we_need_an_http_transfer_soap_binding_and_simplified_wsdl http://www.w3.org/2001/tag/doc/ws-uri.html http://www.pacificspirit.com/blog/2004/12/20/ruminations_on_wsaddressing_and_transfer_protocols Some bits about mapping XML QNames to URIs (say for EPR Ref Params): http://www.pacificspirit.com/blog/2004/04/29/binding_qnames_to_uris A quote from http://www.w3.org/2001/tag/2006/12/12-tagmem-minutes#item02 where I failed to convince the TAG to push for any solution: "DO: I think the community is missing some technical pieces which would allow people minting identifying EPRs to mint URIs instead ... particularly QName-to-URI ... Also, just because the toolmakers of the WS stack don't buy into WebArch, doesn't mean that their customers wouldn't like some of it ... There are people out there with a more wholistic view, and we should try to help them" Sir Tim Berners-Lee's response immediately after: "TBL: We owe the world a statement about the loss of network effects from having a parallel web ... People have the right to define independent information spaces, we can't stop them ... We have some ideas about possible routes towards convergence, but the TAG can't make that happen ... It's not a topic on which the TAG itself should spend much effort" And so the TAG has effectively stopped looking at the REST vs WS-* separate architectures. One obvious follow-on interpretation is that because the TAG has said little against WS-Transfer, the W3C may have nothing against WS-Transfer, and other follow-on specs. Again, I was hoping for something from the customer or other non-vendor sides to bring back to the TAG and others, but that didn't happen. And I remain surprised by that. Cheers, Dave > -----Original Message----- > From: Alan Dean [mailto:alan.dean@...] > Sent: Tuesday, June 12, 2007 3:34 PM > To: orchard@...
> Cc: Steve Loughran; Rest List > Subject: Re: [rest-discuss] Web of Services for Enterprise Computing > > On 6/12/07, Dave Orchard <orchard@...> wrote: > > There were a lot of users and ".com"s in the room. MSFT > wasn't there. > > So it wasn't about "MS, Sun and IBM"s goals, nor reformating REST. > > > > I was hoping to hear more on the bridging of architectures from the > > non software vendors, but didn't. > > > > A great deal of the reason why there are 2 very different > tools in the > > toolbox is because nobody has provided joins between the > tools. Even > > when I have different tools in the toolbox, sometimes they can have > > things in common to make it easier to use the essential > parts of each > > tool. Like having a metric to imperial conversion or tools > with both units. > > Dave, > > Are there any resources / write-ups about the kind of tools > that you have in mind? It sounds like my impressions of what > you mean is incorrect and that may be because there is no > context on the referenced blog entry and the Meeting report > does not elaborate on that aspect. > > The reason why I mentioned MS, Sun and IBM is that it is hard > to imagine a significant delta to the WS-* spec stack without > their buy-in, even if they weren't in the room - but if your > comments do not imply any specification changes to WS-*, it > isn't important. > > I have to admit that I wonder what the WS <-> REST interop > requirement is. No obvious use cases come to mind - although > that could simply be that it's nearly midnight here and I'm > knackered ;-) > > Alan >
Why would RPC take fewer round trips? > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Paul Winkler > Sent: Tuesday, June 12, 2007 5:40 PM > To: rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Message queues > > On Tue, Jun 12, 2007 at 11:37:14PM +0100, Bill de hOra wrote: > > Mike Dierken wrote: > > > > > > > > > How about somebody go and implement an example of a > RESTful approach > > > to a queue using only HTTP and document it. > > > > > > http://www.dehora.net/doc/httplr/draft-httplr-01.html#rfc.section.9 > > > > I'm sure I would change some things on reflection. > > That looks pretty well thought out... but it also takes three > requests to do a single pop. That strikes me as a lot of overhead. > > I wonder if this case is just better served by an RPC or POX approach. > > -- > > Paul Winkler > http://www.slinkp.com > > > > Yahoo! Groups Links > > >
Jon Hanna wrote: > John Panzer wrote: > >> (1) Is it fine for a resource's bits to change per user, if it's defined >> in a user-relative way? So, is "Current Location of requesting user" a >> valid resource retrievable via HTTP? I think the answer is a >> noncontroversial yes, but wanted to double check before asking the followup: >> > > A controversial "yes, but" from me. > > If you indicate the resource will change on the bases of the > Authorization header with Vary: Authorization and add a bit of caution > in how you allow it to be cached (i.e. you don't) because that may catch > out some caches in practice, then fine. > > Side note: Actually you can allow caching with a 0 ttl and a must-revalidate directive. It's not nearly as good as simple time based caching of course, but it's still useful in some cases. Also you could possibly allow private caches only though this is... tricky. > The "but": > > In practice it will probably be more performant and reliable if this is > done by having a bunch of resources for each user (which only that user > - and perhaps admins or other users who would have a reason to look at > them in a different context - can access) and the "current location" > resource redirected to this through a 303 (303 because that is the > cleanest choice for when the target exists in its own right as a > separate resource). > These could possibly be cached (privately) but incurs an extra network round trip. And the client isn't allowed to cache the new Location: either. More seriously, this could cause difficulties if you're doing anything other than GET on the resource. If for example you want to POST to "the current user's set of buckets" to create a new bucket, you definitely don't want your POST turned into a GET. I think the only other reasonable choice is 307 but that one still has problems with non-GETs; clients are supposed to confirm with users before redirecting a POST even with a 307, which is tremendously annoying in this context. 
(Curl for example requires a special flag to do this.) So it'd be interesting to compare both approaches in practice. -John
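The two cacheability options discussed above can be written down as concrete header sets. The directives themselves (Vary, Cache-Control, 303 See Other) are standard HTTP/1.1; the resource paths and the helper function are invented for illustration.

```python
# Sketch: response headers for a user-relative "current location" resource.
# direct=True  -> serve it in place, varying on credentials;
# direct=False -> 303-redirect to a per-user resource.

def current_location_headers(direct=True, user="jpanzer"):
    if direct:
        # Tell caches the response depends on credentials and must be
        # revalidated on every use (the "0 ttl" option from the thread).
        return {
            "Status": "200 OK",
            "Vary": "Authorization",
            "Cache-Control": "private, max-age=0, must-revalidate",
        }
    # Redirect to a separate per-user resource. The redirect itself is
    # kept uncacheable, matching the note that the client may not cache
    # the new Location.
    return {
        "Status": "303 See Other",
        "Location": "/users/%s/location" % user,
        "Cache-Control": "private, max-age=0, must-revalidate",
    }
```

The trade-off from the thread shows up directly: the first form costs cacheability, the second costs an extra round trip and behaves awkwardly for non-GET methods.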
So, I'm going to debate this a bit since I think it's important. Jon Hanna wrote: > John Panzer wrote: >> What if others are allowed to read (GET) but not update (PUT), but >> the owner of the resource is allowed to do both? > Hmm. On thinking about this, it *seems* to me that the semantics of > Allow are much more about what methods can be performed on a resource > than what semantics can be performed on a resource by a given user. Yep. Although if it's allowed to define a resource as "the set of buckets owned by the current user" I don't see why you couldn't define a resource as "a view of buckets owned by jpanzer filtered for the current user". So if you allow the former I think you could legitimately say that the semantics of Allow could be honored if the resource is defined appropriately. However I think you're disallowing the former. > > So if we have, say, guest and resource-owner users and the former can > only GET while the latter can GET and PUT the Allow header should > always be Allow: GET, PUT and the following responses must be generated: > > Guest Owner > GET 2xx/3xx 2xx/3xx > PUT 401 2xx > POST 405 405 > FOO 405 405 > > Note that while Allow: is optional with most responses it's mandatory > with 405s. > > If you need more information about who can do what than this it should > probably be expressed in an entity. > In practice, it turns out to be very useful to know what's allowed and what's not without doing extra requests (some of which have side effects). If for no other reason than avoiding extra network hops. So having to request an additional entity to get 4 bits of information is in practical terms a very hard sell. I suspect that people will instead go with an X- custom header. John
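Jon's table from the quoted post can be made executable. The sketch below follows his position: Allow advertises what the *resource* supports (and is mandatory on a 405), while per-user restrictions surface as 401. The function and method list are illustrative, not from either post.

```python
# Sketch of the guest/owner response table for a resource that supports
# GET and PUT, where only the owner may PUT.

RESOURCE_METHODS = ("GET", "PUT")   # what the resource itself supports

def respond(method, user):
    if method not in RESOURCE_METHODS:
        # 405 requires an Allow header listing the supported methods.
        return 405, {"Allow": ", ".join(RESOURCE_METHODS)}
    if method == "PUT" and user != "owner":
        return 401, {}              # per-user restriction, not per-resource
    return 200, {}
```

John's counterpoint still stands: this tells a guest nothing about PUT until they try it, which is why he expects people to reach for a custom header instead.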
Joe Gregorio wrote about the Amazon queue service in an XML.com article.
http://www.xml.com/pub/a/2005/01/05/restful.html
But the pop operation uses GET and DELETE, and there may be concurrency issues
to solve.
Is it possible to solve the problem using a new resource in the service:
the next available entry of a queue?
queue : /queue/{queue_id}
queue entry : /queue/{queue_id}/entry/{entry_id}
next entry : /queue/{queue_id}/next
The client would PUT on the next available entry URI to pop the queue.
The server will return the representation of the entry and modify its
state to record the fact that it has been popped.
I don't know if we need to send a representation in the body of the PUT
request.
The server may be configured to delete the entry or let the client do it
with the URI of the entry retrieved in the response of the PUT request.
What do you think about the above solution?
-- benoit fleury
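Benoit's proposal can be simulated to see how it behaves. The sketch below follows his URI scheme; everything else (the in-memory classes, the empty-queue 404) is an assumption about how such a server might work. One caveat worth noting: PUT on /next is not idempotent in the usual HTTP sense, since repeating it pops another entry.

```python
# Sketch: PUT /queue/{qid}/next pops the next unpopped entry and answers
# with that entry's own URI plus its representation; the client (or the
# server, depending on configuration) later DELETEs the entry.

class NextEntryQueue:
    def __init__(self):
        self.entries = {}    # eid -> [body, popped?]
        self._eid = 0

    def post(self, body):
        """POST /queue/{qid} -> 201 with the new entry's URI."""
        self._eid += 1
        self.entries[self._eid] = [body, False]
        return 201, "/queue/q1/entry/%d" % self._eid

    def put_next(self):
        """PUT /queue/{qid}/next -> 200, entry URI and body; entry is
        marked popped so no other client receives it."""
        for eid in sorted(self.entries):
            body, popped = self.entries[eid]
            if not popped:
                self.entries[eid][1] = True
                return 200, "/queue/q1/entry/%d" % eid, body
        return 404, None, None   # queue is empty

    def delete(self, eid):
        """DELETE /queue/{qid}/entry/{eid} once processing is done."""
        return 204 if self.entries.pop(eid, None) else 404
```

Compared with the snapshot approach earlier in the thread, this trades idempotence for a single-request pop, which may be the right trade when message loss on client crash is acceptable.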
On 12/06/07, Paul Winkler <pw_lists@...> wrote: > Reading the docs at http://activemq.apache.org/ I noticed they have a > REST API. So I clicked on http://activemq.apache.org/rest.html to > read more and, perhaps not surprisingly, found that it's pretty broken > REST. > > Adding a message to the queue seems fine... they give as an example > a queue at http://www.acme.com/queue/orders/input and you can add a > new message in the queue by POSTing to that URL. Fine. > > Consuming a message from the queue, though, seems problematic. > They allow either GET or DELETE on the *same* URL to pop a message > from the queue. I don't see anything wrong with that (possibly timing issues when two clients consume the same item). GET to find out the queue entry. DELETE to remove it (being processed). Is it 'wrong' to use two verbs on the same resource? If so why? DELETE on a queue perhaps should delete the whole queue. DELETE on an entry I'd hope would DELETE only that entry. regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
On 13.06.2007, at 01:08, A. Pagaltzis wrote:
> The receiver is not affecting the meaning of your request because
> your request does not have such finely specified semantics in the
> first place. POST is POST and means “process this somehow”;
> adding a particular media type into the mix does not make it mean
> anything more specific.
Yes, makes sense.
It follows though, that an HTTP POST request is never sufficient to
communicate the client's intent[1] and that
the intent must be communicated separately as part of the request[2].
Now, if the intent is somewhat orthogonal to the payload media type,
I'd conclude that the actual media type
does not matter[3] and one is rather free in what kinds of e.g.
purchase orders one sends.
If I pick a receiver based on the information that it is an 'order
processor' it effectively does not matter if I send
my order as UBL or as a JPEG scan.
Doh - seems I had my brain entirely confused about the role the media
type plays in an HTTP interaction.
Jan
[1] aka: the client's change of state with regard to a coordination
process (e.g. the client's intent to place an order)
[2] if one does not simply rely on the nature of the receiver being
sufficiently stable between the client choosing
the receiver and the actual request
[3] except for being the how-to-process-the-payload instruction for
the receiver

Jan Algermissen wrote:
> It follows though, that an HTTP POST request is never sufficient to
> communicate the clients intent[1] and that
> the intent must be communicated separately as part of the request[2].

And/or part of the response that made the client aware of the URI it is
using in the POST.
John Panzer wrote:
> Yep. Although if it's allowed to define a resource as "the set of
> buckets owned by the current user" I don't see why you couldn't define a
> resource as "a view of buckets owned by jpanzer filtered for the current
> user". So if you allow the former I think you could legitimately say
> that the semantics of Allow could be honored if the resource is defined
> appropriately. However I think you're disallowing the former.

Changing Allow smells very bad to me.

Entity metadata can change from response to response based on various
factors that allow the server to decide what response it will make. A
client (including an intermediary) can make certain assumptions and not
make certain assumptions based on that.

Resource metadata can change only when the state of the resource
changes. A client (including an intermediary) can make different
assumptions based on that. If a resource allows PUT then the resource
allows PUT.

> In practice, it turns out to be very useful to know what's allowed and
> what's not without doing extra requests (some of which have side
> effects). If for no other reason than avoiding extra network hops. So
> having to request an additional entity to get 4 bits of information is
> in practical terms a very hard sell. I suspect that people will instead
> go with an X- custom header.

Why request an additional entity? Why not have the relevant information
in the entity that informs the client about the URI of the resource?
John Panzer wrote:
> Side note: Actually you can allow caching with a 0 ttl and a
> must-revalidate directive. It's not nearly as good as simple time based
> caching of course, but it's still useful in some cases. Also you could
> possibly allow private caches only though this is... tricky.

I would avoid private caching in this case for the simple reason that
we're being a bit more "quirky" than some caches' assumptions may allow
for. I'm convinced we'd be okay per the spec, but not that we'd be okay
in practice if we allowed such private caching.

> More seriously, this could cause difficulties if you're doing anything
> other than GET on the resource. If for example you want to POST to "the
> current user's set of buckets" to create a new bucket, you definitely
> don't want your POST turned into a GET. I think the only other
> reasonable choice is 307 but that one still has problems with non-GETs;
> clients are supposed to confirm with users before redirecting a POST
> even with a 307, which is tremendously annoying in this context. (Curl
> for example requires a special flag to do this.)

TBH, I'd rather model it as follows:

GET:
1. My client requests "Buckets for current user (user is Jon)".
2. Client redirected to "Buckets for Jon".
3. Client requests "Buckets for Jon".

All is well on GET.

POST/PUT/DELETE:
1. Client requests action be performed on "Buckets for Jon", knowing
that I am Jon (this is reasonable state for the client to hold, it's the
client that manages who is logged in after all) and knowing the resource
for "Buckets for Jon" from above.
Benoit suggests:
> queue : /queue/{queue_id}
> queue entry : /queue/{queue_id}/entry/{entry_id}
> next entry : /queue/{queue_id}/next
>
> The client would PUT on the next available entry URI to pop the queue. The
> server will return the representation of the entry and modify its state to
> save the fact that it has been popped.
This would seem to depend on what happens if you do a 2nd PUT
/queue/{queue_id}/next?
Will you get the same entry as the first PUT? If not, then you just broke
PUT's idempotency requirement. If you do get the same entry, then what is the
"already popped" state change used for?
Tricky problem!
Andrzej Jan Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com
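The idempotency problem Andrzej points at can be made concrete with a small in-memory model (hypothetical code, not Benoit's or ActiveMQ's): if a PUT on the fixed `/queue/{queue_id}/next` URI pops, then repeating the identical request changes server state again, which an idempotent method must not do.

```python
from collections import deque

# Hypothetical in-memory model of the proposed server, illustrating the
# idempotency problem: if PUT /queue/{id}/next pops, repeating the exact
# same request changes server state again, which PUT is not allowed to do.
class QueueServer:
    def __init__(self, entries):
        self.entries = deque(entries)

    def put_next(self):
        # "Pop" semantics: each identical request removes another entry.
        if not self.entries:
            return 404, None
        return 200, self.entries.popleft()

server = QueueServer(["msg-1", "msg-2"])
assert server.put_next() == (200, "msg-1")
# An idempotent PUT repeated on the same URI should leave the server in
# the same state; here the repeat pops a *different* entry instead.
assert server.put_next() == (200, "msg-2")
```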
On 6/13/07, Dave Orchard <orchard@...> wrote:
> And so the TAG has effectively stopped looking at the REST vs WS-*
> separate architectures. One obvious follow on interpretation is that
> because the TAG has said little against WS-Transfer, the W3C may have
> nothing against WS-Transfer, and other follow-on specs.
>
> Again, I was hoping for something from the customer or other non-vendor
> sides to bring back to the TAG and others, but that didn't happen. And I
> remain surprised by that.

Well, I see you are still listed as part of the TAG, so may be in a
position of influence. I will take Stuart Williams aside the next time I
see him at the coffee machine and express my disappointment with the
process.

I have already expressed some deep unhappiness about the way that WS-A
got to 1.0 without any tests, which is complete anathema to development
from a test-centric viewpoint (if your standard has no tests, how do you
assess compliance?), but was left with the impression that the TAG views
testing as an implementation detail, not a better way to describe
specifications: a formal description you can code against.

-steve (off to get that coffee now)
On 6/12/07, Alan Dean <alan.dean@...> wrote:
> On 6/12/07, Dave Orchard <orchard@...> wrote:
> >
> > It's funny the way these things happen. I said a few things that I'm
> > still surprised people didn't really agree with or pick up on..
> >
> > http://www.pacificspirit.com/blog/2007/03/06/w3c_web_of_services_for_enterprise_computing
>
> (excerpt):
> "... I'm still surprised that there wasn't more support for technical
> ways of bringing the two architectures together ..."
>
> I'm not surprised. I really can't imagine MS, Sun and IBM accepting
> that the spec stack now built on top of SOAP should be unwound.

What makes you think they have a choice?

If you look at the MS story, they are embracing it, admittedly by
showing how WCF can be tweaked to handle it.

As for IBM, well, they make lots of money off SOA, and so are fully
committed to it as a concept. But at the same time, other players in the
enterprise -MS, Sun, BEA, Oracle- must see that money and want a slice
of it. If IBM can retain it by sticking with SOA, then REST is a way to
level the playing field, just as SOAP was a response to EJB.

Sun are working on WADL, which leaves IBM as the odd one out. Yet they
are deeply involved in Atom and such like, so parts of the company are
ready if/when the rest of the org changes direction.

-steve
On 6/12/07, Mike Schinkel <mikeschinkel@...> wrote:
> Paul Winkler
> > Reading the docs at http://activemq.apache.org/ I noticed
> > they have a REST API. So I clicked on
> > http://activemq.apache.org/rest.html to read more and,
> > perhaps not surprisingly, found that it's pretty broken REST.
>
> Rather ironic given Roy Fielding's high profile role related to Apache...

Nobody in Apache imposes architectural rules on any project; it is up to
the separate communities to choose the areas of interest, and the
implementation. If you want to fix the RESTy API, your contributions may
be welcome. Start with a nice public critique of it that I can link to,
then get involved and write something better.

-steve
Apache member and member of the Apache Ant and Apache WS management
committees.
On Jun 13, 2007, at 2:50 PM, Steve Loughran wrote: > On 6/12/07, Alan Dean <alan.dean@...> wrote: > > On 6/12/07, Dave Orchard <orchard@...> wrote: > > > > > > It's funny the way these things happen. I said a few things > that I'm still surprised people didn't really agree with or pick up > on.. > > > > > >http://www.pacificspirit.com/blog/2007/03/06/ > w3c_web_of_services_for_enterprise_computing > > > > (excerpt): > > "... I'm still surprised that there wasn't more support for > technical > > ways of bringing the two architectures together ..." > > > > I'm not surprised. I really can't imagine MS, Sun and IBM accepting > > that the spec stack now built on top of SOAP should be unwound. > > What makes you think they have a choice? > > If you look at the MS story, they are embracing it, admittedly by > showing how WCF can be tweaked to handle it. > > As for IBM, well, they make lots of money off SOA, and so are fully > committed to it as a concept. But at the same time. other players in > the enteprise -MS, Sun, BEA, oracle- must see that money and want a > slice of it. If IBM can retain it by sticking with SOA, then REST is a > way to level the playing field, just a SOAP was a response to EJB. > > "In an interview at IBM's Impact 2007 conference, Jerry Cuomo, CTO for IBM WebSphere, noted that he was recently named an IBM Fellow and it is changing the way he thinks about how WebSphere fits into the Web services and service-oriented architecture (SOA) world. "One of the things you're supposed to do as a Fellow is be thoughtful and not just react," he said. That may explain why he did not react to questions about the more controversial aspects of Java technology in the same way as some others in the Java platform industry do. He is taking the long view beyond Java to innovations using REST and Web- oriented architecture (WOA) or as he terms it "SOA on the Web." 
This is from
http://searchwebservices.techtarget.com/qna/0,289202,sid26_gci1257544,00.html?track=sy80

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/

> Sun are working on WADL, which leaves IBM as the odd one out. Yet they
> are deeply involved in Atom and such like, so parts of the company are
> ready if/when the rest of the org changes direction.
>
> -steve
On Tue, Jun 12, 2007 at 09:52:34PM -0700, Mike Dierken wrote:
> Why would RPC take fewer round trips?
I meant something like:
service = SomeFlavorOfRPCServer('http://example.org/endpoint')
message = service.popMessage()
But now I realize this lacks the "message reconciliation" feature from
Bill's proposal - the server doesn't know if the client successfully
handled the message. So we need to add something like:
service.acknowledgeReceipt(message.id)
So now we've got two hits. Bill's proposal has three; so far I still
haven't seen a RESTful way to spell "retrieve a resource from a
collection, but I don't care what the resource's actual URL is, I just
want the oldest one in the collection". (Or newest if it's a stack
rather than a queue.)
-PW
--
Paul Winkler
http://www.slinkp.com
On Wed, Jun 13, 2007 at 08:21:57AM +0100, Dave Pawson wrote:
> On 12/06/07, Paul Winkler <pw_lists@...> wrote:
> > Reading the docs at http://activemq.apache.org/ I noticed they have a
> > REST API. So I clicked on http://activemq.apache.org/rest.html to
> > read more and, perhaps not surprisingly, found that it's pretty broken
> > REST.
> >
> > Adding a message to the queue seems fine... they give as an example
> > a queue at http://www.acme.com/queue/orders/input and you can add a
> > new message in the queue by POSTing to that URL. Fine.
> >
> > Consuming a message from the queue, though, seems problematic.
> > They allow either GET or DELETE on the *same* URL to pop a message
> > from the queue.
>
> I don't see anything wrong with that (possibly timing issues when two
> clients consume the same item).
>
> GET to find out the queue entry.
> DELETE to remove it (being processed).

That's not what their API does...

> Is it 'wrong' to use two verbs on the same resource?
> If so why?

It's certainly wrong if one of them is GET and it's neither safe nor
idempotent.

> DELETE on a queue perhaps should delete the whole queue.

Exactly. They use DELETE on the queue URL to fetch and delete an
unspecified child resource, not the whole queue. Worse, they allow GET
for the same operation, so GET and DELETE are equivalent.

I don't see a single-request solution except to use POST or maybe PUT,
and then it smells more like some flavor of POX or RPC.

--
Paul Winkler
http://www.slinkp.com
On Jun 13, 2007, at 7:23 AM, Paul Winkler wrote:
> On Tue, Jun 12, 2007 at 09:52:34PM -0700, Mike Dierken wrote:
> > Why would RPC take fewer round trips?
>
> I meant something like:
>
> service = SomeFlavorOfRPCServer('http://example.org/endpoint')
> message = service.popMessage()
>
> But now I realize this lacks the "message reconciliation" feature from
> Bill's proposal - the server doesn't know if the client successfully
> handled the message. So we need to add something like:
>
> service.acknowlegeReceipt(message.id)
>
> So now we've got two hits. Bill's proposal has three; so far I still
> haven't seen a RESTful way to spell "retrieve a resource from a
> collection, but I don't care what the resource's actual URL is, I just
> want the oldest one in the collection". (Or newest if it's a stack
> rather than a queue.)
How about using different media types?
To add an item:
POST /queue
Content-Type: application/xml (or whatever a queue item is)
-->
201 Created
Location: http://server.com/queue/itemXYZ
To request to pop an item:
POST /queue
Content-Type: application/x-www-form-urlencoded
AckTTL=60 (where AckTTL is the time to acknowledge the queue item or
it becomes available again)
-->
200 Ok
http://server.com/queue/itemABC
To delete a popped item:
DELETE /queue/itemABC
...
- Steve
--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
Nic James Ferrier <nferrier@...> writes:
> - everything in the queue disappears when you create the snapshot.

Which may not be suitable when there are multiple consumers waiting. The
problem here is that a snapshot preserves the view of the whole queue,
much like the serializable isolation level in SQL. This makes sharing
parts of the same view and allowing modifications to those parts tricky,
and may not even be possible without blocking other consumers.

But Nic's solution prompts one to think about personalisation. A
consumer does not have to have a snapshot of the whole queue. Giving a
personalised URI which represents the next pending message for a
consumer accomplishes:

- Allows concurrent queue popping.
- No sharing parts of the same view. This allows each consumer to modify
the part presented to it, and also eases the server implementation.

The only thing required is for the consumer to have its own consumer ID.
This is not an unreasonable requirement and can be implemented easily
(GUID or other simpler mechanisms).

A pop is implemented as a sequence of GET and DELETE. A GET causes the
server to return a previously bound message to the consumer, if there is
any. If there isn't any, return the next unbound message.

GET /queue/pending_message?consumer_id=12345 ==> 200, MSG #32
GET /queue/pending_message?consumer_id=12345 ==> 200, MSG #32
GET /queue/message/32 ==> 200, MSG #32
DELETE /queue/message/32 ==> 204
GET /queue/pending_message?consumer_id=12345 ==> 200, MSG #34
DELETE /queue/message/34 ==> 204
POST /queue/pending_message/filter?consumer_id=12345
<== <filter>Return oldest message first</filter>
==> 204
GET /queue/pending_message?consumer_id=12345 ==> 200, MSG #1

YS.
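A minimal in-memory sketch of this personalised-URI scheme (class names, IDs, and return conventions invented; the filter step is omitted) might look like:

```python
# Sketch of /queue/pending_message?consumer_id=...: a GET binds the next
# unbound message to that consumer and keeps returning the same message
# until it is DELETEd, so GET stays safely repeatable.
class PendingQueue:
    def __init__(self, messages):
        self.messages = dict(messages)   # msg_id -> body
        self.bound = {}                  # consumer_id -> msg_id

    def get_pending(self, consumer_id):  # GET /queue/pending_message?consumer_id=
        msg_id = self.bound.get(consumer_id)
        if msg_id is None or msg_id not in self.messages:
            taken = set(self.bound.values())
            unbound = [m for m in sorted(self.messages) if m not in taken]
            if not unbound:
                return 404, None         # nothing pending for this consumer
            msg_id = unbound[0]
            self.bound[consumer_id] = msg_id
        return 200, self.messages[msg_id]

    def delete(self, msg_id):            # DELETE /queue/message/{id}
        self.messages.pop(msg_id, None)
        return 204

q = PendingQueue({32: "MSG #32", 34: "MSG #34"})
assert q.get_pending(12345) == (200, "MSG #32")
assert q.get_pending(12345) == (200, "MSG #32")   # repeated GET, same message
q.delete(32)
assert q.get_pending(12345) == (200, "MSG #34")   # advances only after DELETE
```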
FWIW, the typical MSMQ pattern is that the 'pop' is destructive. when
a client 'reads' a message, it is permanently removed from the stack.
it is assumed that, if the client encounters an error processing the
message, it is the client's responsibility to: 1) push the msg back
onto the stack (if rights allow); 2) push the msg to another (error)
stack; or 3) report an error.
with that in mind, it seems a POST to /queues/{q_id}/pop/ would return
the msg in the body.
returning a 302 to /queues/{q_id}/{msg_id} is an option, but might
confuse the situation since the POST would be seen as 'removing' the
msg from the queue.
mamund
On 6/13/07, Paul Winkler <pw_lists@...> wrote:
> On Tue, Jun 12, 2007 at 09:52:34PM -0700, Mike Dierken wrote:
> > Why would RPC take fewer round trips?
>
> I meant something like:
>
> service = SomeFlavorOfRPCServer('http://example.org/endpoint')
> message = service.popMessage()
>
> But now I realize this lacks the "message reconciliation" feature from
> Bill's proposal - the server doesn't know if the client successfully
> handled the message. So we need to add something like:
>
> service.acknowlegeReceipt(message.id)
>
>
> So now we've got two hits. Bill's proposal has three; so far I still
> haven't seen a RESTful way to spell "retrieve a resource from a
> collection, but I don't care what the resource's actual URL is, I just
> want the oldest one in the collection". (Or newest if it's a stack
> rather than a queue.)
>
>
> -PW
>
>
>
> --
>
> Paul Winkler
> http://www.slinkp.com
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
On 6/13/07, Paul Winkler <pw_lists@...> wrote: > [snip] > > I don't see a single-request solution except to use POST or maybe PUT, > and then it smells more like some flavor of POX or RPC. Don't feel bad about it: "... Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context. In addition, placing the application state on the client-side reduces the server's control over consistent application behavior, since the application becomes dependent on the correct implementation of semantics across multiple client versions ..." http://roy.gbiv.com/pubs/dissertation/rest_arch_style.htm#sec_5_1_3 "... The trade-off, however, is that a cache can decrease reliability if stale data within the cache differs significantly from the data that would have been obtained had the request been sent directly to the server ..." http://roy.gbiv.com/pubs/dissertation/rest_arch_style.htm#sec_5_1_4 "... The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs ..." http://roy.gbiv.com/pubs/dissertation/rest_arch_style.htm#sec_5_1_5 Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
Chris Burdess <dog@...> writes:
> Yohanes Santoso wrote:
> > Nic James Ferrier <nferrier@...> writes:
> > > - everything in the queue disappears when you create the snapshot.
> >
> > Which may not be suitable when there are multiple consumers
> > waiting.
>
> I'm still unclear why the queue cannot be implemented using DELETE to
> pop the queue entry.

Because that would have been too easy, right? :)

DELETE has to be idempotent. If it returns you message A the first time,
then if there is nothing else happening afterwards, another DELETE
should return message A again too. So, how do you advance to the next
message? You would have to do something so that the next DELETE returns
a different message, but that means DELETE alone is not doing the
popping.

Furthermore, even if DELETE does the popping, the network is not
reliable. There will be cases where you need to retry that operation.
How do you do that if the popping is not retryable? Popping as a single
action, as presented in most queue libraries, is not a problem because
data transfer is reliable within a process (if it is not, you have a
whole other, much more serious problem).

> You GET the current state of the queue, which contains links to the
> individual queue entry URLs. Then you pick the first queue entry
> you're interested in and DELETE it. Either you are successful, and
> the server returns 200 and the entity representing the queue entry,
> or some other client has beaten you to it, you get a 404, and you
> have to try the next entry.

This is similar to a spinlock: repeatedly try to lock until you succeed.

> What's wrong with this?

A spinlock is wonderful if there is a high probability that you'll get
the lock soon, real soon. Otherwise it will start to generate
unnecessary load. In networking, you can't afford unnecessary load
because of high latency and limited bandwidth.
Furthermore, since the network is not reliable, how do you re-get the
message if the DELETE response to you is interrupted mid-way? See
httplr, which is quite complex for something that is conceptually
simple, because it acknowledges that the network is not reliable.

YS.
Yohanes Santoso wrote: > Nic James Ferrier <nferrier@...> writes: > > - everything in the queue disappears when you create the snapshot. > > Which may not be suitable when there are multiple consumers > waiting. I'm still unclear why the queue cannot be implemented using DELETE to pop the queue entry. You GET the current state of the queue, which contains links to the individual queue entry URLs. Then you pick the first queue entry you're interested in and DELETE it. Either you are successful, and the server returns 200 and the entity representing the queue entry, or some other client has beaten you to it, you get a 404, and you have to try the next entry. What's wrong with this? -- Chris Burdess
Paul Winkler wrote: > I don't see a single-request solution except to use POST or maybe PUT, > and then it smells more like some flavor of POX or RPC. POSTing POX isn't necessarily unRESTful (depending on a few other things). It isn't getting the best out of REST if a solution getting more out of the full range of methods is available instead, but it's not actually unRESTful either. Depending on just what the queue's semantics are meant to be, I'd probably do something with POST here methinks.
Yohanes Santoso wrote:
> Chris Burdess <dog@...> writes:
> > I'm still unclear why the queue cannot be implemented using DELETE to
> > pop the queue entry.
>
> Because that would have been too easy, right? :)
>
> DELETE has to be idempotent. If it returns you message A the first
> time, then if there is nothing else happening afterwards, another
> DELETE should return message A again too.

No. Another DELETE will return 404 because the message has been deleted.
Note, we don't DELETE the queue URL, we DELETE the message URL.

Idempotence doesn't mean that the DELETE will always return the same
status code and/or entity on successive requests. It means that the
state of the server will be the same whether you have 1 DELETE or more
than 1 DELETE of the same URL.

> So, how do you advance to
> the next message? You would have to do something so that the next
> DELETE returns a different message, but that means DELETE alone is not
> doing the popping.

As I said, the queue provides a list of the message URLs.

> Furthermore, even if DELETE does the popping, the network is not
> reliable. There will be cases where you need to retry that
> operation. How do you do that if the popping is not retryable?

Fair point.
--
Chris Burdess
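The distinction Chris draws can be shown in a few lines (a toy in-memory model, not any real server): idempotence of DELETE is about the resulting server state, not about getting the same response each time.

```python
# Toy server: repeating a DELETE on the same URL leaves the server state
# unchanged (that is idempotence), even though the second response
# carries a different status code.
class Server:
    def __init__(self):
        self.resources = {"/queue/msg/1": "hello"}

    def delete(self, url):
        if url in self.resources:
            del self.resources[url]
            return 200
        return 404

s = Server()
assert s.delete("/queue/msg/1") == 200   # first DELETE removes the message
state_after_one = dict(s.resources)
assert s.delete("/queue/msg/1") == 404   # repeat gets a different status...
assert s.resources == state_after_one    # ...but server state is unchanged
```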
Chris Burdess <dog@...> writes:
> Yohanes Santoso wrote:
> > Chris Burdess <dog@...> writes:
> Note, we don't DELETE the queue URL, we DELETE the message URL.

Apologies. The above and

> As I said the queue provides a list of the message URLs.

made me realise that I misread your previous email. Somehow, I split
your email into two parts, one proposing a DELETE on the queue URL and
the other on retrying repeatedly. The only excuse I can offer is that it
was lunch hour and I was multi-tasking between replying and eating. Some
of my blood was being 307-ed to the stomach.

YS.
* Jan Algermissen <algermissen1971@...> [2007-06-13 09:40]: > If I pick a receiver based on the information that it is an > 'order processor' it effectively does not matter if I send my > order as UBL or as a JPEG scan. Well, if the processor can deal with a JPEG scan, then yeah, it doesn’t matter. I don’t know how likely it is to find such processors in the wild, though. :-) > Doh - seems I had my brain entirely confused about the role the > media type plays in an HTTP interaction. It tells the recipient how to interpret the body. If I get application/xhtml+xml in response to a request, trying to process it according to the spec for image/png will probably be in vain. It also gives intermediaries a clue about what’s inside without them having to try to divine it from the body, so that they could conceivably transparently transcode XML response bodies to another encoding, or could recompress images in known formats on the fly to help out mobile or dial-up clients. * Jon Hanna <jon@...> [2007-06-13 11:10]: > Jan Algermissen wrote: > > It follows though, that an HTTP POST request is never > > sufficient to communicate the clients intent[1] and that the > > intent must be communicated separately as part of the > > request[2]. > > And/or part of the response that made the client aware of the > URI it is using in the POST. What Jon said. Note that same said part of the response (or something nearby) will probably also tell the client what media type the server will accept; c.f. AtomPP and the app:accept element. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On 6/13/07, Stefan Tilkov <stefan.tilkov@...> wrote:
> On Jun 13, 2007, at 2:50 PM, Steve Loughran wrote:
> > As for IBM, well, they make lots of money off SOA, and so are fully
> > committed to it as a concept. But at the same time, other players in
> > the enterprise -MS, Sun, BEA, Oracle- must see that money and want a
> > slice of it. If IBM can retain it by sticking with SOA, then REST is a
> > way to level the playing field, just as SOAP was a response to EJB.
>
> "In an interview at IBM's Impact 2007 conference, Jerry Cuomo, CTO
> for IBM WebSphere, noted that he was recently named an IBM Fellow and
> it is changing the way he thinks about how WebSphere fits into the
> Web services and service-oriented architecture (SOA) world. "One of
> the things you're supposed to do as a Fellow is be thoughtful and not
> just react," he said. That may explain why he did not react to
> questions about the more controversial aspects of Java technology in
> the same way as some others in the Java platform industry do. He is
> taking the long view beyond Java to innovations using REST and Web-
> oriented architecture (WOA) or as he terms it "SOA on the Web."

Well, the purpose of corporate R&D is to lead the company into the
future. If that can be done with the existing product roadmap, all well
and good. But if the company is going off in the wrong direction,
somebody needs to be ready for when they discover this...

-steve
Hi Bill,
I have put a version with references here:
http://betathoughts.blogspot.com/2007/06/brief-history-of-consensus-2pc-and.html
enjoy
Mark
>
> Wow. If you have a weblog, please cut and past that into it. It
> explained things very clearly (I can never seem to find anything on this
> list easily after it's been said)
>
> cheers
> Bill
>
Actually, it turns out that repeated DELETE requests on the same URI
should return the same status code each time. The status code reflects
whether or not the resource is gone at completion of the request. In two
subsequent requests this is true - it doesn't matter that it happened to
be gone at the start of the second request...

But I think you can do a conditional DELETE request to be sure the
client is the only one processing that message. However, there is no
UNDELETE, so if the client fails to process the message, it either needs
to resubmit the complete message, or mark the message as needing
processing again (perhaps posting the message URI to a retry-queue or
something).

So, it comes down to getting a list of messages and attempting to
reserve one (or more) for processing. If the reservation can also return
the content of the message then that cuts out a request. If the
reservation is accepted on the server, but doesn't make it to the
client, then either the reservation should be idempotent (or specific to
the client) or that message becomes an orphan and times out eventually.
If the completion of processing needs an acknowledgement (a 'message
removal') then there's an additional request.

The reservation is a way to avoid collisions among multiple consumers.
There may be other ways to avoid collisions (each client chooses a hash
for itself and only processes the set of messages that have the same
hash value) but I don't think this is as simple or guaranteed as
individual message reservation.

On 6/13/07, Chris Burdess <dog@...> wrote:
> Yohanes Santoso wrote:
> > Chris Burdess <dog@...> writes:
> > > I'm still unclear why the queue cannot be implemented using DELETE to
> > > pop the queue entry.
> >
> > Because that would have been too easy, right? :)
> >
> > DELETE has to be idempotent. If it returns you message A the first
> > time, then if there is nothing else happening afterwards, another
> > DELETE should return message A again too.
>
> No. Another DELETE will return 404 because the message has been deleted.
> Note, we don't DELETE the queue URL, we DELETE the message URL.
>
> Idempotence doesn't mean that the DELETE will always return the same
> status code and/or entity on successive requests. It means that the
> state of the server will be the same whether you have 1 DELETE or more
> than 1 DELETE of the same URL.
>
> > So, how do you advance to
> > the next message? You would have to do something so that the next
> > DELETE returns a different message, but that means DELETE alone is not
> > doing the popping.
>
> As I said the queue provides a list of the message URLs.
>
> > Furthermore, even if DELETE does the popping, the network is not
> > reliable. There will be cases where you need to retry that
> > operation. How do you do that if the popping is not retryable?
>
> Fair point.
> --
> Chris Burdess
Mike Dierken wrote:
> Actually, it turns out that repeated DELETE requests on the same URI
> should return the same status code each time.
> The status code reflects whether or not the resource is gone at
> completion of the request. In two subsequent requests this is true -
> it doesn't matter that it happened to be gone at the start of the
> second request...

All I can see on this is that the effect upon the server of two DELETEs
(barring interactions with other actions between the first and the
second) is the same, not that the response must be the same.

Can you cite please.
Sorry, I can't find any authoritative reference. I guess it's just my
opinion then, based on the assumption that the status indicates the
final state of the indicated resource of the request, rather than the
status of a 'locate, activate, operate' series of steps.

On 6/13/07, Jon Hanna <jon@...> wrote:
> Mike Dierken wrote:
> > Actually, it turns out that repeated DELETE requests on the same URI
> > should return the same status code each time.
> > The status code reflects whether or not the resource is gone at
> > completion of the request. In two subsequent requests this is true -
> > it doesn't matter that it happened to be gone at the start of the
> > second request...
>
> All I can see on this is that the effect upon the server of two DELETEs
> is (barring interactions with other actions between the first and the
> second) is the same, not that the response must be the same.
>
> Can you cite please.
>>>>> "Jon" == Jon Hanna <jon@...> writes:
Jon> All I can see on this is that the effect upon the server of
Jon> two DELETEs is (barring interactions with other actions
Jon> between the first and the second) is the same, not that the
Jon> response must be the same.
It's an idempotent method (2616, 9.1.2) which implies this imo.
--
All the best,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
On 13/06/2007 15:23, Paul Winkler wrote: > So now we've got two hits. Bill's proposal has three; so far I still > haven't seen a RESTful way to spell "retrieve a resource from a > collection, but I don't care what the resource's actual URL is, I just > want the oldest one in the collection". (Or newest if it's a stack > rather than a queue.) What about having a URI representing the end of the queue, e.g. http://example.com/queue/last A DELETE on that URI redirects with a 303 to the current last item in the queue, which is removed and returned in the body. Ian -- work - http://www.talis.com/platform play - http://iandavis.com/blog callto:ian_davis
On 6/13/07, Jon Hanna <jon@...> wrote: > All I can see on this is that the effect upon the server of two DELETEs > is (barring interactions with other actions between the first and the > second) is the same, not that the response must be the same. /me puts on his Roy hat, in Roy's absence It's not even that. DELETE is idempotent because it is defined to be idempotent. That doesn't mean the server can't do non-idempotent things when processing a request, it only means that ****the client didn't request them**** s/DELETE/GET/ & s/idempotent/safe/ Mark.
Benoit said: > Yep I forgot the idempotency requirement of PUT. You're right. > Thanks. > > So use POST ? Or is this new resource a bad idea ? Part of the problem I see with a GET followed by a DELETE is dependent on what the client does with the GETted queue entry. What if another client also did a GET, completed their processing and then issued the DELETE successfully, thus popping the message? Does that mean that the first client should not have done any processing on the message they did a GET on, since the POP (DELETE) will fail? As is typical, the answer is "it depends" on the application specifics. What if you did something like this instead:

1) GET the next message (using a nice, generic URL that points to the "next" message resource, e.g. somequeue/next), which also returns a specific URL for that message, for example, somequeue/msg/42.

2) PUT to the returned message URL, somequeue/msg/42, a state change that "grants ownership" of that message to a client id: PUT somequeue/msg/42?clientID=me. A 200 means you now own that message, and can go ahead and process it. A 409 (Conflict) means someone else got it before you did, and you should sh*tcan the message and try to get the next one again.

3) When you are done processing the message, you do a DELETE somequeue/msg/42?clientID=me, to pop it permanently. A 200 means it's gone. A 403 (Forbidden) would be returned if you didn't already "own" the specified message.

Would that come closer to providing a scenario that is restful, but also provides message queuing semantics? Obviously at the expense of extra HTTP calls. The server could potentially (if the app scenario warrants it) implement a timeout so that if a DELETE was not received in some specified time interval, the message "ownership" would be released. In that case a DELETE after the timeout could return a 408 (Request Timeout), though that might be stretching the meaning of a 408 somewhat.
Similarly, once you "own" a message, you could also release the ownership by using a PUT, rather than a DELETE, effectively requeuing the message for others to potentially GET. Also, doing a GET only is like a "peek" in the queue, without the corresponding pop, which is a common use case/feature in many message queuing systems. Just some quick thoughts on the messaging topic.... Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
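Andrzej's claim-then-delete scheme can be sketched as a toy in-memory state machine (illustrative Python; the class and method names are not from the thread), using his status-code mapping: 200 for a successful claim, 409 when someone else claimed first, 403 for a DELETE by a non-owner.

```python
class MessageQueue:
    """Toy in-memory model of the claim-then-delete scheme."""

    def __init__(self):
        self.messages = {}  # msg_id -> owning client id (None until claimed)
        self.order = []     # FIFO order of message ids

    def put_message(self, msg_id):
        self.messages[msg_id] = None
        self.order.append(msg_id)

    def get_next(self):
        """GET /somequeue/next -> URL of the oldest unclaimed message."""
        for msg_id in self.order:
            if self.messages.get(msg_id) is None and msg_id in self.messages:
                return "/somequeue/msg/%s" % msg_id
        return None

    def claim(self, msg_id, client_id):
        """PUT /somequeue/msg/{id}?clientID=... -> 200 or 409."""
        if self.messages.get(msg_id) is None:
            self.messages[msg_id] = client_id
            return 200
        return 409  # someone else got there first

    def pop(self, msg_id, client_id):
        """DELETE /somequeue/msg/{id}?clientID=... -> 200 or 403."""
        if self.messages.get(msg_id) != client_id:
            return 403  # you don't "own" this message
        del self.messages[msg_id]
        self.order.remove(msg_id)
        return 200
```

A real server would persist this state and key it off the actual request URLs; the point is only that the PUT acts as a compare-and-set on ownership.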
I like that definition of a resource, but how would you handle the problem of lost messages? John Heintz On 6/13/07, Ian Davis <lists@...> wrote: > On 13/06/2007 15:23, Paul Winkler wrote: > > So now we've got two hits. Bill's proposal has three; so far I still > > haven't seen a RESTful way to spell "retrieve a resource from a > > collection, but I don't care what the resource's actual URL is, I just > > want the oldest one in the collection". (Or newest if it's a stack > > rather than a queue. > > What about a having a URI representing the end of the queue, e.g. > > http://example.com/queue/last > > A DELETE on that URI redirects with a 303 to the current last item in > the queue which is removed and returned in the body. > > Ian > > -- > work - http://www.talis.com/platform > play - http://iandavis.com/blog > callto:ian_davis > > > > Yahoo! Groups Links > > > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
> Outside of MOM I don't know of any generalized support for many readers. You mean, many readers (of a single resource/message) but only one consumer/processor of that one resource?
"John D. Heintz" <jheintz@...> writes: > I like that definition of a resourcew, but how would you handle the > problem of lost messages? > John Heintz > On 6/13/07, Ian Davis <lists@...> wrote: >> On 13/06/2007 15:23, Paul Winkler wrote: >> > So now we've got two hits. Bill's proposal has three; so far I still >> What about a having a URI representing the end of the queue, e.g. >> http://example.com/queue/last >> A DELETE on that URI redirects with a 303 to the current last item in >> the queue which is removed and returned in the body. >> Ian What John said and also how do you handle the fact that one client's last is not necessarily another's. At the very least clients need to identify themselves to the server. I used a personalised URI in another email to meet that requirement. Once you do that, a pop can be accomplished with at least two messages. YS.
I'm not trying to specify how many writers, it shouldn't matter. I'm thinking about a primitive operation that supports readers. One writer alone could drive hundreds of readers. Just to pick an example: one writer pushes positive integers and many readers check for prime numbers. Not a very useful example on its own, but that single writer could drive many, many readers. This thread (and the reliable messaging proposals for REST) have addressed many issues (like network failures). Perhaps ordered messaging isn't covered yet. I don't know if a single generalized primitive operation is good enough to address this issue. This seems like a more variable problem than "many writers". One of the things I've been thinking about is how to re-implement the EIP patterns in RESTful systems, like Competing Consumers http://www.enterpriseintegrationpatterns.com/CompetingConsumers.html This seems like a tough problem without another method. I haven't found a really good example of what to base that on though. Reading the GFS paper I realized that the append operation on GFS files supports many writers, but GFS doesn't have an operation for many readers. John Heintz On 6/13/07, Mike Dierken <dierken@...> wrote: > > Outside of MOM I don't know of any generalized support for many readers. > > You mean, many readers (of a single resource/message) but only one > consumer/processor of that one resource? > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
Yohanes Santoso wrote: > What John said and also how do you handle the fact that one client's > last is not necessarily another's. At the very least clients need to > identify themselves to the server. I used a personalised URI in > another email to meet that requirement. If you use "personalised" (i.e. different) URLs, then these represent different queue resources. If you really mean to have a personalised queue, use the same URL to reference the queue, and authentication to identify the client. -- Chris Burdess
Mark Baker wrote: > How about a TAKE method? Still suffers from the unreliable-network problem, that the connection may be lost during transmission of the entity, losing the whole message. -- Chris Burdess
Mike Dierken wrote: > Sorry, I can't find any authoritative reference. > I guess it's just my opinion then, based on the assumption that the > status indicates the final state of the indicated resource of the > request, rather than the status of a 'locate, activate, operate' > series of steps. From what I can see the status reflects the state of the action rather than the resource. Of course the state of the resource after the action has taken place will be one of the influences on this, but the state of the resource before the action is another, as are a few other factors (the most obvious case being server error - not a factor that affects idempotence [if there is an error or expiry we can no longer depend upon idempotency, and this is stated] but one that affects response status). Berend de Boer wrote: > Jon> All I can see on this is that the effect upon the server of > Jon> two DELETEs is (barring interactions with other actions > Jon> between the first and the second) is the same, not that the > Jon> response must be the same. > > It's an idempotent method (2616, 9.1.2) which implies this imo. I definitely see how it implies this, but not how it entails this. Idempotence (as defined in RFC 2616 9.1.2) refers only to the side-effects of N > 0 identical requests - that is to say, the state of any affected resources after the request was made - not the details of the response (which at the very least should always have a different Date header unless the two requests were less than a second apart). Consider a DELETE to http://example.net/del

1st request:
Status: 200/202/204
State of server after action: http://example.net/del doesn't exist
State of client after action: Knows that http://example.net/del doesn't exist (or may not in the future if response is 202).
State of intermediaries: Any record of http://example.net/del marked as stale.

2nd request:
Status: 202/404/410
State of server after action: http://example.net/del doesn't exist
State of client after action: Knows that http://example.net/del doesn't exist (or may not in the future if response is 202).
State of intermediaries: Any record of http://example.net/del marked as stale.

nth request:
Status: 202/404/410
State of server after action: http://example.net/del doesn't exist
State of client after action: Knows that http://example.net/del doesn't exist (or may not in the future if response is 202).
State of intermediaries: Any record of http://example.net/del marked as stale.

The side effect (http://example.net/del is either deleted or marked for later deletion in the case of 202) is the same after n requests as after one:

1. It satisfies the definition of idempotency as stated.
2. The one case where the nth request could differ from the (n-1)th request is if a 202 from previously was acted upon with the result that the state of http://example.net/del changes from "marked for deletion" to "deleted". However, this would be a case of expiry and/or other operations (since idempotency can be affected by other operations that happen between the (n-1)th and nth request).
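Jon's point can be made concrete with a small simulation (illustrative code, not from the thread): repeated DELETEs leave the server in the same state, yet the status codes can differ.

```python
# A toy resource store: the server's state is just this dict.
resources = {"/del": "some representation"}

def delete(uri):
    """Handle a DELETE: the side effect is idempotent, the status is not."""
    if uri in resources:
        del resources[uri]
        return 204  # the resource existed and is now gone
    return 404      # it was already gone

first = delete("/del")
second = delete("/del")
third = delete("/del")
```

After every call the server state is identical (the resource is absent), which is all that RFC 2616's definition of idempotence requires; only the response status varies between the first and subsequent requests.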
On 6/12/07, Jan Algermissen <algermissen1971@...> wrote: > Suppose the existence of a media type for purchase orders, e.g. > application/order. When I POST > an HTTP message of that media type to an HTTP server it seems > reasonable to interpret this > request as an intent to order something. [...] > The '201 Created' I am likely to see in both cases isn't helping > much, to check the client's expectations. Orders invoke a business (not technical) protocol called offer-acceptance. http://en.wikipedia.org/wiki/Offer_and_acceptance A purchase order is an offer to buy. When accepted, a contract is bound. So the best practice would be to send a notification of acceptance (or rejection) back to the client, either in the response, or later to a hyperlink sent in the offer for the acceptance response message.
On 6/6/07, Mark Mc Keown <zzcgumk@...> wrote: > > > One of the most important results in consensus theory is the fact > that consensus is impossible in an asynchronous system with only > one faulty processor, even with a perfect network and where the processor > fail-stops. Basically you cannot tell the difference between a processor > that has failed and one that has stopped. This result is used in > the CAP paper. we use a partition aware tuple space (Anubis) for this kind of thing; it does have heartbeats so it only works on a single site, where multicast costs and delays are low. The nice thing about the design is you can assert facts, then one tick later know all nodes know the fact, two ticks later they know you know, three ticks later you know they know...and so on until you get the levels of mutual knowledge where you can start to make assertions: http://www.hpl.hp.com/techreports/2005/HPL-2005-72.html http://www.hpl.hp.com/techreports/2005/HPL-2005-73.html >I did some work on using Paxos Commit and HTTP together to support >distributed transactions, though the transactions did not have full ACID >properties. http://www.allhands.org.uk/2006/proceedings/papers/624.pdf A paper with Steve Pickles on the list and Savas on the credits. Impressive. EGEE and the like are clearly not the place for Anubis-like synchronisation. -steve
Stian Soiland wrote: > ... > > HEAD /users;current > Authorization: (basic: stain:****) > > 307 Temporary redirect > Location: /users/stain > Vary: Authorization > Cache-Control: private This is a fine pattern but I'm not convinced it's required (or should be). The redirect works for GET and HEAD, and fails for POST, PUT, and DELETE. For cases where clients must always HEAD before doing anything else this isn't an issue, but for others I think it's problematic. (What exactly would you do if you got a POST on /users;current? 310?) I'm still looking for an essential argument one way or another. In the scheme above, the response still varies per user, so you still have to worry about cache control and Vary: Authorization, and you add the headache of another network request and complications with PUT/POST/DELETE. The alternative would be: GET /users;current Authorization: (basic: stain:****) 200 OK Vary: Authorization Cache-control: private, must-revalidate ....data.... Which has the following bonuses: (1) Works with all HTTP methods; (2) Only one network request. What are the minuses, considering that we're disabling proxy caching anyway? (Of course, in serious applications using HTTP Basic Auth the whole thing will be over TLS and so uncacheable by proxies.) I'm also trying to articulate the essential distinction between the above and this: GET /foo Accept-Language: da, en-gb;q=0.8, en;q=0.7 200 OK Vary: Accept-Language ...data... That is, is there an essential architectural style issue with a resource for "the current user, whoever that is"? Or merely practical issues? -John
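A minimal sketch of the dispatch John is questioning (hypothetical names; assuming the authenticated user has already been extracted from the Authorization header): the redirect answer is clear for safe methods, but the pattern offers no obvious behaviour for writes.

```python
def handle_current_user(method, authenticated_user):
    """Handle a request to /users;current, returning (status, location)."""
    if authenticated_user is None:
        return (401, None)  # can't resolve "current user" without credentials
    if method in ("GET", "HEAD"):
        # Redirect to the user-specific URI; the response still varies
        # per user, so it needs Vary: Authorization / Cache-Control: private.
        return (307, "/users/" + authenticated_user)
    # POST/PUT/DELETE: the thread treats redirecting unsafe methods as
    # problematic, so this sketch simply refuses them.
    return (405, None)
```

This makes John's objection visible: the second network round-trip for safe methods, plus an arbitrary policy decision for everything else, versus simply serving the representation directly with the same Vary/Cache-Control headers.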
On 6/14/07, John Panzer <jpanzer@...> wrote: > > > That is, is there an essential architectural style issue with a resource > for "the current user, whoever that is"? Or merely practical issues? In theory, the way to do this would be a Content-Location header with a URI specific to the user. In practice, I'm not sure how well that would work. Maybe the future will be brighter, now that people are actually using HTTP. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
Chris Burdess <dog@...> writes: > Yohanes Santoso wrote: >> What John said and also how do you handle the fact that one client's >> last is not necessarily another's. At the very least clients need to >> identify themselves to the server. I used a personalised URI in >> another email to meet that requirement. > > If you use "personalised" (i.e. different) URLs, then these represent > different queue resources. > > If you really mean to have a personalised queue, use the same URL to > reference the queue, and authentication to identify the client. > -- > Chris Burdess Different URLs do not imply different resources. However, in the above case, the different URLs happen to point to different resources which share the same queue resource. So, the queue is not personalised, but the access (the dequeuing process in this case) to the queue is. The need to identify a client to a server can be fulfilled in various ways. One of them is by giving each client its own URL, and another is by equating the authentication token and the client ID. I happen to like giving each consumer client its own URL because it disassociates the client and its authentication token. This means a monitoring client can peek (does the GET, but not the DELETE) at various consumer clients' pending messages, and the sysadmin can tell which requests are made by the monitoring client and which by the consumer client because they would have distinguishable authentication signatures. YS.
Paul Winkler wrote: > So now we've got two hits. Bill's proposal has three; so far I still > haven't seen a RESTful way to spell "retrieve a resource from a > collection, but I don't care what the resource's actual URL is, I just > want the oldest one in the collection". (Or newest if it's a stack > rather than a queue.) There isn't a way to say that (with HTTP at least). That's why you need to at least coordinate on a data structure describing the collection. It's one reason why modeling a queue in HTTP is trickier than it seems; (i.e., POP is awkward to implement). After looking at this problem for half a decade, I would always try to reduce the problem to modeling lists and not queues if I could get away with it. HTTPLR has 3 steps because that's what allows a client and server to reach agreement in an asymmetric protocol. In theory because of the asymmetry they need an infinite number of exchanges to reach agreement; in practice you can say that in theory the messages will eventually arrive ;) Other rm protocols, like Biztalk's BTF2.0, tend to end up using pairs of servers and exposing receipt URLs. cheers Bill
Yohanes Santoso wrote: > I happen to like giving each consumer client its own URL because it > disassociates client and its authentication token. This means a > monitoring client can peek (does the GET, but not the DELETE) at > various consumer clients's pending messages, and the sysadmin can tell > which requests are made by the monitoring client and which by the > consumer client because they would have distinguishable authentication > signature. 3rd party monitoring was my number one operations use case for HTTPLR. In the case where clients are supposed to "take" messages as opposed to "read" them, giving everyone their own URL is the most natural fit on HTTP. Anything else gets to be a headache. cheers Bill
Mark Baker wrote: > > > How about a TAKE method? Nice idea for unreliable delivery. To support guaranteed once and only once, you still need a step to verify you GOT the message. It can't be done in one request; thems the maths :\ cheers Bill
How about LEASE with a timeout, followed by DELETE? If the client goes *poof*, the timeout would clean up the reservation and return the message to the queue. John On 6/14/07, Bill de hOra <bill@...> wrote: > Mark Baker wrote: > > > > > > How about a TAKE method? > > Nice idea for unreliable delivery. To support guaranteed once and only > once, you still need a step to verify you GOT the message. It can't be > done in one request; thems the maths :\ > > cheers > Bill > > > > Yahoo! Groups Links > > > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
Also, the LEASE should return a Content-Location where the lease itself is exposed as a Resource. John On 6/14/07, John D. Heintz <jheintz@...> wrote: > How about LEASE with a timeout, followed by DELETE? > > If the client goes *poof*, the timeout would cleanup reservation and > return the message to the queue. > > John > > On 6/14/07, Bill de hOra <bill@...> wrote: > > Mark Baker wrote: > > > > > > > > > How about a TAKE method? > > > > Nice idea for unreliable delivery. To support guaranteed once and only > > once, you still need a step to verify you GOT the message. It can't be > > done in one request; thems the maths :\ > > > > cheers > > Bill > > > > > > > > Yahoo! Groups Links > > > > > > > > > > > -- > John D. Heintz > Principal Consultant > New Aspects of Software > Austin, TX > (512) 633-1198 > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
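John's LEASE-with-timeout idea might look like this in miniature (illustrative Python, not a real HTTP method; the clock is passed in explicitly so expiry is easy to follow): a lease reserves a message for one client, DELETE requires a live lease, and an expired lease returns the message to the queue.

```python
import time

class LeasedQueue:
    """Sketch: LEASE reserves a message; DELETE needs a valid lease."""

    def __init__(self, lease_seconds=30):
        self.lease_seconds = lease_seconds
        self.leases = {}  # msg_id -> (client_id, expiry timestamp)
        self.queue = []   # FIFO of message ids

    def lease(self, client_id, now=None):
        """LEASE: reserve the next message for client_id, or None if empty."""
        now = time.time() if now is None else now
        # Expired leases: the client went *poof*, so requeue those messages.
        for msg_id, (owner, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[msg_id]
                self.queue.insert(0, msg_id)
        if not self.queue:
            return None
        msg_id = self.queue.pop(0)
        self.leases[msg_id] = (client_id, now + self.lease_seconds)
        return msg_id

    def delete(self, msg_id, client_id, now=None):
        """DELETE: pop permanently; 403 without a live lease, else 200."""
        now = time.time() if now is None else now
        owner, expiry = self.leases.get(msg_id, (None, 0))
        if owner != client_id or expiry <= now:
            return 403
        del self.leases[msg_id]
        return 200
```

This is also where Bill's objection bites: the timeout makes the protocol's outcome depend on wall-clock behaviour, which is harder to reason about formally than a pure request/acknowledge exchange.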
TAKE, LEASE ... sounds like this discussion has Changed-Location to [discuss-things-other-than-rest] ;-) Alan
John D. Heintz wrote: > > > How about LEASE with a timeout, followed by DELETE? > > If the client goes *poof*, the timeout would cleanup reservation and > return the message to the queue. My problem with a lease is that it would make it harder to say formal things about a protocol that used it. cheers Bill
Bill, You certainly could be right, but I don't understand how having a time-bounded lease would make systems any harder to formally describe (given systems can get disconnected anyway). I'm not very well-versed in the literature on how/why leases would complicate this more, references appreciated. Recently I've been reading Mark Mc Keown et al.'s paper on HARC and co-allocation (http://www.allhands.org.uk/2006/proceedings/papers/624.pdf) and then also the Promises in (http://www-db.cs.wisc.edu/cidr/cidr2007/papers/cidr07p36.pdf). My random ideas here are just me trying to find the underlying concept (intuitively) from the papers I read and two observations: * reserving things (like a hotel room, or a spot to get my hair cut) is a successful and very old practice. * I haven't found a distributed system with direct support for multiple readers (besides MOM which uses transactions) I think in a previous message in this thread I said there might not be an answer, or a viable REST method to fill this gap. It might just be a problem that must always be solved explicitly by the client and the server; without infrastructure support. John On 6/14/07, Bill de hOra <bill@...> wrote: > John D. Heintz wrote: > > > > > > How about LEASE with a timeout, followed by DELETE? > > > > If the client goes *poof*, the timeout would cleanup reservation and > > return the message to the queue. > > My problem with a lease is that it would make it harder to say formal > things about a protocol that used it. > > cheers > Bill > > > > Yahoo! Groups Links > > > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
On Wed, Jun 13, 2007 at 03:47:52PM +0100, Alan Dean wrote: > Don't feel bad about it: Yeah, that seems to be the consensus. I won't :) I love mailing lists where I can fire off an idle question during a break and then spend days digesting the voluminous responses. Thanks everyone. -- Paul Winkler http://www.slinkp.com
I think we are miscommunicating on the phrase 'many readers'. From my perspective, the web is pretty good at having many readers of a resource. I think what you are talking about is having a single /consumer/ of a message (for a queue). I also don't see how invoking a new method would help, if we can't even describe how that new method would operate. Interestingly, for publish-subscribe messaging (using a topic), there are many readers as well as many consumers - that's much easier build in the REST style. (see http://www.topiczero.com:8080/xmlrouter/) > -----Original Message----- > From: jheintz@... [mailto:jheintz@...] On Behalf > Of John D. Heintz > Sent: Wednesday, June 13, 2007 8:40 PM > To: Mike Dierken > Cc: Mark Baker; rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Re: Message queues > > I'm not trying to specify how many writers, it shouldn't > matter. I'm thinking about a primitive operation that > supports readers. > > One writer alone could drive hundreds of readers. Just to pick an > example: one writer pushes positive integers and many readers > check for prime numbers. Not a very useful example on it's > own, but that single writer could drive many, many readers. > > This thread (and the reliable messaging proposals for REST) > have addressed many issues (like network failures. Perhaps > ordered messaging isn't covered already. > > I don't know if a single generalized primitive operation is > good enough to address this issue. This seems like a more > variable problem than "many writers". > > One of the things I've been thinging about is how to > re-implement the EIP patterns in RESTful systems, like > Competing Consumers > http://www.enterpriseintegrationpatterns.com/CompetingConsumers.html > > This seems like a tough problem without another method. I > haven't found a really good example of what to base that on > though. 
Reading the GFS paper I realized that the append > operation on GFS files supports many writers, but GFS doesn't > have an operation for many readers. > > John Heintz > > On 6/13/07, Mike Dierken <dierken@...> wrote: > > > Outside of MOM I don't know of any generalized support > for many readers. > > > > You mean, many readers (of a single resource/message) but only one > > consumer/processor of that one resource? > > > > > > > -- > John D. Heintz > Principal Consultant > New Aspects of Software > Austin, TX > (512) 633-1198
On 14/06/2007 00:59, Ian Davis wrote: > On 13/06/2007 15:23, Paul Winkler wrote: >> So now we've got two hits. Bill's proposal has three; so far I still >> haven't seen a RESTful way to spell "retrieve a resource from a >> collection, but I don't care what the resource's actual URL is, I just >> want the oldest one in the collection". (Or newest if it's a stack >> rather than a queue. > > What about a having a URI representing the end of the queue, e.g. > > http://example.com/queue/last > > A DELETE on that URI redirects with a 303 to the current last item in > the queue which is removed and returned in the body. > In retrospect this isn't a great solution since 303 requires the redirect to be followed with a GET rather than a DELETE Ian -- work - http://www.talis.com/platform play - http://iandavis.com/blog callto:ian_davis
* Mike Schinkel <mikeschinkel@...> [2007-06-09 22:45]: > Thanks, you've just artfully illustrated my point! Say "URL > Construction" and the RESTians stick their fingers in their > ears and scream "I don’t hear you" while humming very loudly... > ;-) > > IOW, the phrase "URL construction" is a trigger that causes > most REST advocates to immediately become defensive rather than > to be willing to explore how to achieve both benefits of > hypermedia *and* URL construction. It's kinda similar to the > feelings that the phrase "amnesty" evokes in certain people > here in the US right now. ;-) Your harping on this matter and your triumphalist “a-ha! gotcha!” responses are getting tiresome, I have to say. Please quit making up controversy that doesn’t exist. The matter is very simple: if the URI is constructed according to promises published in a resource by the server, then it is hypermedia. If the URI is constructed based on a priori client knowledge of the server URI space then it is not hypermedia. Fin. > >If the server communicates a URI template and how it should be > >used then it IS hypermedia. > > Your acknowledgement here is begrudging rather than > revelational. No, his acknowledgement is a Rorschach test. Your interpretation of it as begrudged is merely a reflection of your own bias. > It's my belief we need to stop being afraid of URI construction > as a phrase and instead look for how to achieve most or all of > its benefits without causing harm to the REST architecture. That’s very simple: if the URI is constructed based on hypermedia, then the application is RESTful. If not, it’s not. I don’t know what we need to talk about, here. > If RESTians really want to promote the use of the hypermedia > constraint they need to catalyze the creation of tools that > make it brain-dead easy to program hypermedia in a generic sense. No doubt.
> >A good test of this is whether it can deal with a change of > >URIs in the path, host, scheme and query string portion > >(moving information between each of these parts I'd consider a > >plus but not a vital necessity). > > Heh. Any and every REST system will fail that test. After all, > how do you change the entry point URL? '-) Not by changing the code, hopefully. In fact, you could make that URI a link in a resource you control, in which case the REST client would in fact not need to change *at all*. See? We could split hairs about the exceptional case of the entry point all day long. :-) What matters is you know what Jon meant; and you do. > >That it doesn't directly relate to what people are thinking > >about when they think about media types aimed at other uses > >(particularly web services) > > That's one fallacy RESTians often commit; assuming that everyone > that is trying to understand REST is steeped in SOAP or RPC. > Many web developers (myself included) had never used SOAP or > RPC so for us this is unnecessary complication. I realised this today. The reason I have been harping on the hypermedia constraint lately is that it is the only part of REST that I hadn’t already understood. Contrary to those coming from a SOAP/WS-* background, I “grew up” with HTTP and it never quite occurred to me how many more differences between RPC style and REST style exist, and more importantly that the REST way of doing things with respect to those differences isn’t necessarily an obvious and natural approach. > Rather than go round and round on this, will you at least agree > that most people need more guidance than just being told: "any > web page with links and forms is an example?" Was anyone seriously saying that this is sufficient guidance? I didn’t think anyone was claiming that. Of *course* it’s not enough as a send-off for people building apps.
> >Really hypermedia is almost too simple that some people > >(focusing on other concerns, especially if their background > >makes RPC or other non-RESTful solutions seem more obvious) > >can't change gears. > > Maybe the problem for many that can't change gears is that the > debates are too often framed on an abstract "good" vs. "bad" > turning it into a religious debate rather than providing > examples that illustrate the benefits of REST and that speak > for themselves? I think it is simply a matter of the lessons not having sunk in enough. A lot of people are still learning, so they say things that omit or misinterpret large parts of the style, or focus on irrelevant or orthogonal issues, etc. Certainly I’ve been noticing a lot of that since I’ve had my lightbulb moment. But I think this matter will straighten itself out over time as more people absorb the lessons and apply it in practice, coming away with examples from experience. It’s just the natural process of adoption. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Yep I was trying to resolve the concurrency problem in using a GET
followed by a DELETE but the solution wasn't very good ;)
After some reflection, I wonder whether the simplest way isn't to use a
resource that models the queue of each consumer.
The POP operation would then consist of POSTing the next available entry of
a public queue into the consumer's personal processing queue.
A consumer must subscribe to the queue service. The service will create
new resources to model this consumer:
/consumer/{c_id} : the consumer
/consumer/{c_id}/queue : queue of entries processed by the consumer
...
If a consumer wants to POP the next entry of a public queue, it sends a
POST request to its processing queue indicating the URI of the next
entry in the public queue (/queue/{q_id}/next).
I don't know what the best way to build this HTTP request is.
A consumer may POP several entries before processing them.
It may POP entries from different public queues.
To consume its personal queue, the consumer may use GET + DELETE
without worrying about synchronization.
-- benoit fleury
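Benoit's scheme might look like this in miniature (illustrative Python with hypothetical names; a real service would expose these operations at the /consumer/{c_id}/queue and /queue/{q_id}/next URIs he describes): POP moves the next public-queue entry into a per-consumer processing queue, which that consumer can then drain without contention.

```python
class QueueService:
    """Sketch: per-consumer processing queues fed from public queues."""

    def __init__(self):
        self.public = {}     # q_id -> list of entries (public queues)
        self.consumers = {}  # c_id -> personal processing queue

    def subscribe(self, c_id):
        """Create the resources modelling this consumer."""
        self.consumers[c_id] = []

    def pop_to_consumer(self, c_id, q_id):
        """POST to /consumer/{c_id}/queue naming /queue/{q_id}/next:
        transfer the next public entry into the personal queue."""
        queue = self.public.get(q_id, [])
        if not queue:
            return 404  # public queue empty
        self.consumers[c_id].append(queue.pop(0))
        return 201

    def consume(self, c_id):
        """GET + DELETE on the private queue; no other client touches it."""
        if not self.consumers[c_id]:
            return None
        return self.consumers[c_id].pop(0)
```

The server still needs to make the transfer in pop_to_consumer atomic when two consumers POST for the same public entry at once; the gain is that everything after the transfer is contention-free.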
Andrzej Jan Taramina wrote:
> Benoit:
>
>> Yep I forgot the idempotency requirement of PUT. You're right.
>> Thanks.
>>
>> So use POST ? Or is this new resource a bad idea ?
>
> POST would work, since it lets you collapse the GET and DELETE into a single
> operation, but I suppose it really depends on what problem you are trying to
> solve.
>
> Sounded like you were trying to resolve the concurrency problem in using a
> GET followed by a DELETE, to pop an entry. That is, you GET the next queue
> entry, but before you can issue a DELETE, someone else has blown it away. Is
> that correct?
>
> Concurrency collisions can be a tough problem in REST. I'm surprised that
> this hasn't been discussed on the list that much.
>
>
> Andrzej Jan Taramina
> Chaeron Corporation: Enterprise System Solutions
> http://www.chaeron.com
>
>
I saw this entry on Pat Helland's blog and wondered if using a 'ledger' style could be used to make the queue work:

-> "start from an empty queue"
HEAD /foo
Accept: text/plain

<- 200 OK
Content-Type: text/plain
ETag: ""

-> "push a new item onto the queue"
POST /foo
If-Match: ""
Content-Type: application/x-www-form-urlencoded
Accept: text/plain

push=123

<- 303 See Other
Location: http://example.com/foo/0

-> GET /foo
Accept: text/plain

<- 200 OK
Content-Type: text/plain
ETag: "whsysfg"

123

-> "push a second item onto the queue"
POST /foo
If-Match: "whsysfg"
Content-Type: application/x-www-form-urlencoded

push=456

<- 303 See Other
Location: http://example.com/foo/0

-> GET /foo
Accept: text/plain

<- 200 OK
Content-Type: text/plain
ETag: "iostard"

456
123

-> "peek at the topmost queue item"
GET /foo/0
Accept: text/plain

<- 200 OK
Content-Type: text/plain

456

-> "pop the topmost queue item"
POST /foo
If-Match: "iostard"
Content-Type: application/x-www-form-urlencoded
Accept: text/plain

pop

<- 200 OK
Content-Type: text/plain

456

-> GET /foo
Accept: text/plain

<- 200 OK
Content-Type: text/plain
ETag: "whsysfg"

123

This means that all GET / HEAD requests have no side-effects. Enqueuing and dequeuing are only carried out using POST. You could still choose to overwrite the whole queue with a PUT or remove all items with a DELETE.

POST has the following characteristics that permit the usage shown above:

"The action performed by the POST method might not result in a resource that can be identified by a URI."

"Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields."

http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5

Together these mean that it is ok to return a queue item that has been dequeued by the POST - and by not setting any cache control response headers, you can be sure that intermediaries won't keep a copy.

Regards,
Alan Dean
http://thoughtpad.net/alan-dean
http://simplewebservices.org
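A toy model of the ledger-style queue in the trace above, with the If-Match check made explicit (the class, its method names, and the 7-character ETag digest are all illustrative assumptions, not part of Alan's post): every POST must carry the ETag of the state it last saw, so a concurrent push or pop that changed the queue in the meantime gets a 412 instead of silently overlapping.

```python
import hashlib

class LedgerQueue:
    def __init__(self):
        self.items = []  # newest first, like the /foo listing in the trace

    def etag(self):
        # a content-derived ETag: same queue contents, same tag
        if not self.items:
            return '""'
        digest = hashlib.sha1("\n".join(self.items).encode()).hexdigest()[:7]
        return '"%s"' % digest

    def post(self, if_match, action, value=None):
        # optimistic concurrency: the client must prove it saw current state
        if if_match != self.etag():
            return 412, None            # Precondition Failed: re-GET and retry
        if action == "push":
            self.items.insert(0, value)
            return 303, None            # See Other -> the new topmost item
        if action == "pop":
            if not self.items:
                return 404, None
            return 200, self.items.pop(0)  # body carries the popped item
```

Note that with a content-derived ETag, popping back to a previously seen queue state reproduces the earlier tag, which is harmless here because If-Match only guards against unseen intermediate changes at the moment of the request.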
I'll resend this as it seems to have bounced off the list..

On 14 Jun 2007, at 14:57, Stian Soiland wrote:

> On 12 Jun 2007, at 08:33, Mike Schinkel wrote:
>
> > John Panzer wrote:
> > > (1) Is it fine for a resource's bits to change per user, if it's defined in a user-relative way? So, is "Current Location of requesting user" a valid resource retrievable via HTTP? I think the answer is a noncontroversial yes, but wanted to double check before asking the followup:
> >
> > My two cents is "no." If one needed a generic URL I think it should redirect to a specific one. Not sure what status code. But I would also be interested in hearing any counter arguments if there are any...
>
> In previous discussions [1] we ended up with some kind of conclusion that a generic URI can be the 'current user' in that it will always temporarily non-cached varied redirect to the user's URI. For example, I've done it like this:
>
> GET /
>
> 200 OK
> Content-Type: text/xml
> <capabilities>
>   <users xlink:href="/users" /> <!-- This is where to POST if registering a new user -->
>   <currentUser xlink:href="/users;current" /> <!-- Redirects to the home of the authenticated user -->
> </capabilities>
>
> GET (or even HEAD) /users;current will require authentication, and on success will be something along the lines of:
>
> HEAD /users;current
> Authorization: (basic: stain:****)
>
> 307 Temporary redirect
> Location: /users/stain
> Vary: Authorization
> Cache-Control: private
>
> The key here is that the redirect claims Vary: Authorization because the redirect varies depending on the Authorization header. In addition it tries kindly to ask non-Vary-aware proxies not to cache, which they shouldn't do with a temporary redirect anyway.
>
> The only downfall is that really this should be quite cacheable if the authorisation is always the same.
> [1] http://tech.groups.yahoo.com/group/rest-discuss/message/8464
>
> --
> Stian Soiland, myGrid team
> School of Computer Science
> The University of Manchester
> http://www.cs.man.ac.uk/~ssoiland/

--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
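Stian's `/users;current` redirect can be sketched as a small handler (the function name and the hard-coded credential-to-user lookup are stand-ins for real authentication): the response is a 307 that varies on the Authorization header and is marked private, so a shared cache can never hand one user another user's redirect.

```python
def current_user_redirect(authorization):
    # hypothetical credential lookup; a real server would decode and
    # verify the Basic credentials ("c3RhaW46..." is base64 of "stain:...")
    user = {"Basic c3RhaW46****": "stain"}.get(authorization)
    if user is None:
        # unauthenticated requests must authenticate first
        return 401, {"WWW-Authenticate": 'Basic realm="users"'}
    return 307, {
        "Location": "/users/%s" % user,
        "Vary": "Authorization",     # the redirect differs per credentials
        "Cache-Control": "private",  # belt and braces for non-Vary-aware proxies
    }
```

The Vary header is what makes the generic URI safe to expose through caches; Cache-Control: private is the extra politeness Stian mentions for proxies that ignore Vary.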
Ian Davis wrote:
> In retrospect this isn't a great solution since 303 requires the
> redirect to be followed with a GET rather than a DELETE

Well, strictly it's a SHOULD rather than a MUST. Of course SHOULDs mean you do it unless you have a darn good reason not to AND have considered all possible consequences, but...

If we consider the fully compliant route as follows:

1. Client Request.
2. Server Response 303.
3. Client Request GET.
4. Server Response 200 (Client now knows about the resource of the queue item and that it can be deleted).
5. Client Request DELETE.

Then we can skip points 3 and 4 if we have a way of communicating to the client the information it needs to know it can DELETE without it doing a GET and without violating REST and HTTP. Not sure if we can communicate that, but maybe someone will have an idea.
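One hypothetical way to communicate that: have the 303 response itself carry an Allow header for the redirect target. To be clear, Allow on a 303 describing the Location resource is an assumed convention for this sketch, not standard HTTP; the point is only to show the client-side shape of skipping steps 3 and 4. The `post`/`get`/`delete` parameters stand in for an HTTP library.

```python
def pop(post, get, delete, queue_uri):
    # steps 1-2: the initial request, answered with a 303
    status, headers = post(queue_uri)
    if status != 303:
        return None
    target = headers["Location"]
    if "DELETE" in headers.get("Allow", ""):
        # the server told us DELETE is supported: skip the GET round trip
        return delete(target)
    # fully compliant route: steps 3-4 (GET), then step 5 (DELETE)
    get(target)
    return delete(target)
```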
John D. Heintz wrote: > Bill, > > You certainly could be right, but I don't understand how having a > time-bounded lease would make systems any harder to formally describe > (given systems can get disconnected anyway). That's a very good point; I hadn't thought of modeling a lease expiry as a failure. And I should say that in the case where a client 'sends' a message in HTTPLR, you can have what I called 'phantoms' which are open exchanges that won't complete because the client has for that exchange "gone away" and isn't coming back. I punted on that on the basis that (roughly) because you can GET the exchange, an admin could do something about these, but it's a heuristic as to a) whether the client is coming back, b) how to do it on a given server. A lease arguably automates that by putting a constraint on the client to complete in a given timeframe. cheers Bill
Josh Sled wrote:
> "Alan Dean" <alan.dean@...> writes:
>> TAKE, LEASE ... sounds like this discussion has Changed-Location to
>> [discuss-things-other-than-rest]
>
> No, just things other than HTTP/1.1.
>
> There's not just 4 operations in the world... we just haven't found the right
> ones, yet.

No, but increasing methods is painful compared to increasing headers, which is painful compared to increasing document features. I'm thinking POST might still be the best way to do this in practice.
Hi all, On my blog I have written a tutorial-style entry on how Marc Hadley's WADL (Web Application Description Language) together with my "REST Describe & Compile" can be used to generate client code in PHP5, Java, Python, and Ruby. The example uses Yahoo's Inbound Links API. Both the generated code (PHP 5 in the tutorial) and the application can be tried out directly on the page. http://blog.tomayac.de/index.php?date=2007-06-14&time=15:06:52&perma=REST+Describe+%26+Comp.html What do you think? Looking forward to hearing back from you. Cheers, Tom PS: two more links... 1) the WADL I have used: http://blog.tomayac.de/images/Yahoo!_Inbound_Links_API.wadl.xml 2) REST Describe & Compile: http://tomayac.de/rest-describe/latest/RestDescribe.html -- Thomas Steiner http://blog.tomayac.de mailto:tomac AT google DOT com
Benoît Fleury wrote:
> I have two questions about this scenario.
>
> 1. 'somequeue/msg/42?clientID=me'
>
> Does this resource exist ?

It doesn't matter. Resources are inaccessible.

> Is it the same resource as 'somequeue/msg/42' ? I think no, so it seems weird to PUT no representation on a nonexistent URI. Is PUTting its ID (in a representation) on the existing resource 'somequeue/msg/42' a better solution ?
>
> 2. The second question is not a question :). I'm not comfortable telling the consumer to retry on the next available entry resource if the PUT failed (409 Conflict). It may increase the number of HTTP requests if there are a lot of concurrent consumers.
>
> The only solution I see to avoid the retry is to use a single POST to assign an entry to a consumer.

Don't pre-optimise. Figure out how to express the behavior you want correctly first.

cheers
Bill
2007/6/15, Bill de hOra <bill@dehora.net>:
> Benoît Fleury wrote:
> > I have two questions about this scenario.
> >
> > 1. 'somequeue/msg/42?clientID=me'
> >
> > Does this resource exist ?
>
> It doesn't matter. Resources are inaccessible.

I'm not sure I understand :)

> > Is it the same resource as 'somequeue/msg/42' ? I think no, so it seems weird to PUT no representation on a nonexistent URI. Is PUTting its ID (in a representation) on the existing resource 'somequeue/msg/42' a better solution ?
> >
> > 2. The second question is not a question :). I'm not comfortable telling the consumer to retry on the next available entry resource if the PUT failed (409 Conflict). It may increase the number of HTTP requests if there are a lot of concurrent consumers.
> >
> > The only solution I see to avoid the retry is to use a single POST to assign an entry to a consumer.
>
> Don't pre-optimise. Figure out how to express the behavior you want correctly first.

Totally agree with you, but here I want to choose between two solutions:
- use GET and PUT on an entry until you get the lock on it
- or use POST only one time to get the lock on the next available entry

Shouldn't performance, simplicity of the implementation or other consequences of a design choice be examined ?

-- benoit
"Benoît Fleury" <benoit.fleury@...> writes:
> Totally agree with you but here I want to choose between two solutions.
> - use GET and PUT on an entry until you get the lock on it
> - or use POST only one time to get the lock on the next available entry

Why only those two?

The first one degrades quickly as the number of consumers increases. I assume the second one entails POSTing to a next-entry URL as illustrated in your other email. The response of such a POST is the URL of the popped entry. If there is a problem while reading the response (the network connection is interrupted or the client is killed, for example), then there is no way to retrieve the entry URL returned earlier.

Why not give each consumer its own access URL? With that, a POP is always done in two HTTP requests, a GET followed by a DELETE, unless there is a network or client or server problem. Then you can repeat the GET as many times as you want until the problem goes away.

YS.
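The per-consumer access URL idea can be sketched like this (class and method names are illustrative): the server reserves the next entry for a consumer on first access, and keeps returning the same reserved entry on repeated GETs, so a failure anywhere in the GET/DELETE pair is recovered by simply re-issuing the GET.

```python
class ConsumerAccess:
    """Models one consumer's private access URL over a shared queue."""

    def __init__(self, shared):
        self.shared = shared    # backing queue shared by all consumers
        self.reserved = None    # entry currently assigned to this consumer

    def get(self):
        # first GET reserves the next entry server-side; every GET after
        # that returns the same entry, so the request is safe to repeat
        if self.reserved is None and self.shared:
            self.reserved = self.shared.pop(0)
        return self.reserved

    def delete(self):
        # completes the pop; the next GET reserves a fresh entry
        entry, self.reserved = self.reserved, None
        return entry
```

The reservation itself is a server-side side effect, but from the client's point of view repeated GETs are idempotent, which is what makes the retry story work.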
A. Pagaltzis wrote:
> * Mike Schinkel <mikeschinkel@...> [2007-06-09 22:45]:
> > Thanks, you've just artfully illustrated my point! Say "URL Construction" and the RESTians stick their fingers in their ears and scream "I don't hear you" while humming very loudly... ;-)
> >
> > IOW, the phrase "URL construction" is a trigger that causes most REST advocates to immediately become defensive rather than to be willing to explore how to achieve both benefits of hypermedia *and* URL construction. It's kinda similar to the feelings that the phrase "amnesty" evokes in certain people here in the US right now. ;-)
>
> Your harping on this matter and your triumphalist "a-ha! gotcha!" responses are getting tiresome, I have to say. Please quit making up controversy that doesn't exist.

And *your* harping is not tiresome?!? Just because you don't appreciate my position doesn't give you moral authority to elevate yours over mine. And please, let's avoid the ad hominems, because it doesn't do this list good to devolve to that.

> The matter is very simple: if the URI is constructed according to promises published in a resource by the server, then it is hypermedia. If the URI is constructed based on a priori client knowledge of the server URI space then it is not hypermedia. Fin.
>
> > > If the server communicates a URI template and how it should be used then it IS hypermedia.
> >
> > Your acknowledgement here is begrudging rather than revelational.
>
> No, his acknowledgement is a Rorschach test. Your interpretation of it as begrudged is merely a reflection of your own bias.

I guess I'm the only one on this list whose comments are reflecting a bias? '-)

> > It's my belief we need to stop being afraid of URI construction as a phrase and instead look for how to achieve most or all of its benefits without causing harm to the REST architecture.
> That's very simple: if the URI is constructed based on hypermedia, then the application is RESTful. If not, it's not.

But it is a point that is not even addressed by many when discussing with people who are learning REST, and those newly christened RESTians go forth and preach the dogma that URIs cannot be constructed, period. My "tiresome" comments are meant to shine a light on the issue to discourage even more future cargo-cultists.

> > > A good test of this is whether it can deal with a change of URIs in the path, host, scheme and query string portion (moving information between each of these parts I'd consider a plus but not a vital necessity).
> >
> > Heh. Any and every REST system will fail that test. After all, how do you change the entry point URL? '-)
>
> Not by changing the code, hopefully.
>
> In fact, you could make that URI a link in a resource you control, in which case the REST client would in fact not need to change *at all*.
>
> See? We could split hairs about the exceptional case of the entry point all day long. :-)
>
> What matters is you know what Jon meant; and you do.

That's a marginal case on the open Internet. Publishing an API for others to consume ensures that this will almost certainly not be the case. My bringing it up was NOT splitting hairs; it was to make the point that REST is not pure as physics is pure, and that there are edge case problems with the hypermedia constraint. As every REST system could theoretically be composed to make a larger REST system, the once published entry point now becomes verboten to be constructed.

And the converse is also true, that many REST services could be decomposed into smaller independent services. When the decomposition occurs, what is the entry point? Is it a constructed URL, or did you have to follow hypermedia from the larger service to get to it?
And if that larger service is then composed with yet more services, where are the valid entry points that don't require hypermedia? Thus I see a problem with the hypermedia constraint because it does not scale upwards or downwards.

While I see its theoretical benefit, I see problems in its real world use as just described, and that's why I think it is so important to actively encourage the incorporation of URL composition into the mix at all levels. By encouraging URL construction using templates, REST services will be more easily able to scale, albeit there will still be edge problems, but less so. Services would compose URLs based on templates, but where the template comes from is the edge case, yet that can easily be provided by the larger service when services are composed. As is, services faithfully following the hypermedia constraint are ironically brittle with respect to any changes involving composition or decomposition.

What's more, assuming an arbitrary Internet-published REST-based API, it is much easier to program a direct resource retrieval using URI composition than it is to program a hypermedia-following resource retrieval, partly because there are absolutely no standards for such discovery and retrieval, leaving the hapless developer or entrepreneur to code it themselves. For great developers it is trivial, but for many smaller businesses or entrepreneurs w/o hotshot developers on staff it is not. So a company publishing a web API can either tell its potential users to follow the pure REST hypermedia model, or do URI construction. And if they do the latter, they are likely to get a lot more people using it. Which would you choose? If you say hypermedia, I can tell we are discussing a hypothetical question and not one on which your livelihood depends.
Finally, for an open API published on the web, I am almost willing to argue that textually publishing the URL format and encouraging people to do URI construction w/o hypermedia is okay, assuming the company is willing to maintain those URLs. After all, I can't see any reason why Amazon couldn't commit itself to maintaining its services at http://services.amazon.com/ where to get info on one of the items they sell you would just append their "ASIN" to the end of their "items" URL: http://services.amazon.com/items/1234567890/

There are many things in life where companies need to put a stake in the ground and then maintain that stake, e.g. car makers have to maintain spare parts for their cars for many years. I see no reason why it should be absolutely forbidden for companies to publish REST APIs for the open Internet that do not require hypermedia to discover and parse.

The hypermedia constraint is simply the web's example of the more general abstraction and indirection pattern used to improve maintainability of systems throughout software development. But as experience has shown us, too much abstraction and too much indirection make for too much complexity, and that pill can at times be worse than the ailment it attempts to cure. I know this as I have often tried to over-generalize a system only to find I'd made it too complex to work with. Sometimes it is better to simply hardcode something than to make it too complex. And I'd argue that published open APIs on the Internet could well be a valid place where URLs could be reasonably hardcoded.

There, I said the heresy. Let the burning at the stake begin...

My guess is that you deal with internal systems a lot more than you deal with the open internet. Maybe that's why your bias differs from mine.
> > > That it doesn't directly relate to what people are thinking about when they think about media types aimed at other uses (particularly web services)
> >
> > That's one fallacy RESTians often commit; assuming that everyone that is trying to understand REST is steeped in SOAP or RPC. Many web developers (myself included) had never used SOAP or RPC, so for us this is an unnecessary complication.
>
> I realised this today. The reason I have been harping on the hypermedia constraint lately is that it is the only part of REST that I hadn't already understood.

Ah. There is rarely a man as zealous as the newly converted. And what was that you said about "bias?" '-)

> > Rather than go round and round on this, will you at least agree that most people need more guidance than just being told: "any web page with links and forms is an example?"
>
> Was anyone seriously saying that this is sufficient guidance?

Yes.

> I didn't think anyone was claiming that.

Not in the past week, there wasn't. Go back to October of last year when I was learning about REST. I'd dig it up but Google doesn't seem to be indexing Yahoo groups well...

> But I think this matter will straighten itself out over time as more people absorb the lessons and apply it in practice, coming away with examples from experience.
>
> It's just the natural process of adoption.

...and as people like me, and you, have these *tiresome* debates.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
* Mike Schinkel <mikeschinkel@...> [2007-06-15 20:55]:
> A. Pagaltzis wrote:
> > That's very simple: if the URI is constructed based on hypermedia, then the application is RESTful. If not, it's not.
>
> But it is a point that is not even addressed by many when discussing with people who are learning REST, and those newly christened RESTians go forth and preach the dogma that URIs cannot be constructed, period. My "tiresome" comments are meant to shine a light on the issue to discourage even more future cargo-cultists.

You mean you assume that Jon doesn’t know what REST means and doesn’t mean?

> > > Heh. Any and every REST system will fail that test. After all, how do you change the entry point URL? '-)
> >
> > Not by changing the code, hopefully.
> >
> > In fact, you could make that URI a link in a resource you control, in which case the REST client would in fact not need to change *at all*.
> >
> > See? We could split hairs about the exceptional case of the entry point all day long. :-)
> >
> > What matters is you know what Jon meant; and you do.
>
> That's a marginal case on the open Internet. Publishing an API for others to consume ensures that this will almost certainly not be the case. My bringing it up was NOT splitting hairs, it was to make the point that REST is not pure as physics is pure, and that there are edge case problems with the hypermedia constraint. As every REST system could theoretically be composed to make a larger REST system, the once published entry point now becomes verboten to be constructed.

Uh, nowhere did I admit that it needs to be constructed. The client needs to receive *some* entry point URI out of band. This is called a “bookmark.” Why would the client ever *construct* one? So since your premise looks false to me…

> And the converse is also true, that many REST services could be decomposed into smaller independent services.
> When the decomposition occurs, what is the entry point? Is it a constructed URL, or did you have to follow hypermedia from the larger service to get to it? And if that larger service is then composed with yet more services, where are the valid entry points that don't require hypermedia? Thus I see a problem with the hypermedia constraint because it does not scale upwards or downwards.

… then necessarily I must consider your conclusion false too.

> While I see its theoretical benefit, I see problems in its real world use as just described and that's why I think it is so important to actively encourage the incorporation of URL composition into the mix at all levels. By encouraging URL construction using templates, REST services will be more easily able to scale albeit there will still be edge problems but less so.

No, they will just be easier to create, but harder to maintain, because they’ll be stronger coupled – unless the URI template comes from hypermedia. Additionally, non-hypermedia based URI construction has no scalability effect in the small and a second-order negative one in the large.

> Services would compose URLs based on templates, but where the template comes from is the edge case yet that can easily be provided by the larger service when services are composed. As is, services faithfully following the hypermedia constraint are ironically brittle with respect to any changes involving composition or decomposition.

I really can’t follow that conclusion.

> What's more, assuming an arbitrary Internet-published REST-based API, it is much easier to program a direct resource retrieval using URI composition than it is to program a hypermedia-following resource retrieval, partly because there are absolutely no standards for such discovery and retrieval, leaving the hapless developer or entrepreneur to code it themselves.

That’s why AtomPP is such a huge deal.
> For great developers it is trivial, but for many smaller businesses or entrepreneurs w/o hotshot developers on staff it is not. So a company publishing a web API can either tell its potential users to follow the pure REST hypermedia model, or do URI construction. And if they do the former, they are likely to get a lot more people using it. Which would you choose? If you say hypermedia, I can tell we are discussing a hypothetical question and not one on which your livelihood depends.

Seeing as I’m primarily a Perl hacker, I’ll just point to WWW::Mechanize for this matter. Proof’s in eating the pudding. Doing hypermedia is very easy given tooling that abstracts the rote work. I’m basing this on actual experience, not hypothesis, much as you’d like to paint the REST proponents with the ivory tower brush.

> Finally, for an open API published on the web, I am almost willing to argue that textually publishing the URL format and encouraging people to do URI construction w/o hypermedia is okay, assuming the company is willing to maintain those URLs. After all, I can't see any reason why Amazon couldn't commit itself to maintain its services at http://services.amazon.com/ where to get info on one of the items they sell you would just append their "ASIN" to the end of their "items" URL http://services.amazon.com/items/1234567890/

That’s fine for Amazon. It’s not so fine at the other end of the wire, because then the other end of the wire is an Amazon client as opposed to a web shop client. Of course Amazon has no incentive to care about that.

> There are many things in life where companies need to put a stake in the ground and then maintain that stake, e.g. car makers have to maintain spare parts for their cars for many years. I see no reason why it should be absolutely forbidden for companies to publish REST APIs for the open Internet that do not require hypermedia to discover and parse.
Imagine if every car company had their own designs for screws, nuts, bolts, lightbulbs, batteries, tires, etc., complete with car-maker-specific screwdrivers, rechargers, tire inflators etc., with a stated promise that production of these parts would be maintained indefinitely.

> The hypermedia constraint is simply the web's example of the more general abstraction and indirection pattern used to improve maintainability of systems throughout software development. But as experience has shown us, too much abstraction and too much indirection make for too much complexity, and that pill can at times be worse than the ailment it attempts to cure.

Smalltalk is based on an indirection at the core of the language semantics level, and practice has since shown that extremely late-bound messaging communication leads to much more flexible and resilient systems than are possible with static early binding.

> I know this as I have often tried to over-generalize a system only to find I'd made it too complex to work with.

And you found no cases where too little indirection made things too hard? Beware of confirmation bias.

> Sometimes it is better to simply hardcode something than to make it too complex. And I'd argue that published open APIs on the Internet could well be a valid place where URLs could be reasonably hardcoded.

If that’s the case, then AtomPP, which is machine-readable hypermedia writ large, will crash and burn. I’ll let history be the judge of that, but I think I can already tell what history will have had to say about this one.

> My guess is that you deal with internal systems a lot more than you deal with the open internet. Maybe that's why your bias differs from mine.

Funny you should say that, as the SOAP/WS-* philosophy with its early binding/tight coupling/code gen mindset comes from internal systems rather than the open web. And no, I don’t deal much at all with internal systems.
Don’t you think I’d believe much more in tools if that were the case?

> > But I think this matter will straighten itself out over time as more people absorb the lessons and apply it in practice, coming away with examples from experience.
> >
> > It's just the natural process of adoption.
>
> ...and as people like me, and you, have these *tiresome* debates.

What I found tiresome is not the debate but rather your desire to be controverted, leading you to incessantly make up sentiments like “begrudging” out of thin air. I’m not here for an interest in claims about each other’s supposed belief systems.

As for my own bias, I’ll note that my lightbulb moment regarding hypermedia was just two weeks or so ago and I’m since realigning my understanding of REST as a whole already.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
>>>>> "Karen" == Karen <karen.cravens@...> writes:
Karen> 1. Authentication. Far as I can tell, all the options that
Karen> are both RESTful and secure require something more than
Karen> vanilla Web 0.1
Absolutely untrue, http://www.pobox.com/~berend/rest/authentication.html
Karen> 2. That darn "there's something beyond GET and POST?"
Karen> thing. I'm pretty much stuck with overloaded POST, I
Karen> think.
You are, because of browser incompatibilities. You don't face such
issues if you use a decent HTTP library.
Karen> Are there any better workarounds for a dumb browser with
Karen> these issues? Having perused the phalanger book and all
Karen> the blogs/wikis/articles I could find, these seem to be The
Karen> Big Two, but are there any similar issues I haven't thought
Karen> of ?
Nope.
--
Live long and prosper,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
* Mike Schinkel <mikeschinkel@...> [2007-06-10 01:15]:
> Jon Hanna wrote:
> > Mike Schinkel wrote:
> > > > From what I understood of what Mike was saying - I think that is what he meant.
> >
> > Then we've just been using terms differently. I don't call that URI construction, I call it hypermedia.
>
> Ah, different meanings for the same terms. The crux of more debates, disagreements, and wars than anything besides differing values or battles for scarce resources.

FWIW, I hadn’t seen this post until just now. (Catching up with the list; I only had time to read part of the thread last night. The volume here can be overwhelming at times.)

I agree fully with Jon here. Hypermedia doesn’t mean you can’t compose URIs. It just means you can’t know what URIs to construct without consulting the server for a description of their structure.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Karen wrote:
> (After spending a couple of days sifting through the archives, I've come to the conclusion that some members of this group have a moderate-to-severe allergy to discussions of actual implementation (and the concomitant compromises), so if that's you, you may want to grab some antihistamines now.)

Spare me. If you want to specifically accuse someone of handwaving, then do so.

> 1. Authentication. Far as I can tell, all the options that are both RESTful and secure require something more than vanilla Web 0.1, and so I'm stuck with Apache Basic Auth or something cookie-based. I think I'm going to go with the latter, because being able to log out is pretty important, I think. Tentatively, I'd go with a scheme wherein there's a login form with a username/password, host returns a username/session token. A bit RESTless, but nothing beyond the token itself is stored in the server-side session, so I'm thinking the damage to RESTfulness is minimal. So is security, but it's better than transmitting the password in the clear every time.

Slightly. Use cookies; abstract your login code out of the view so that it can be reused when a non-browser client wants to authenticate. If you want to see a system that does cookies + www-authenticate, look at Zope2/Plone. There's plenty of implementation in there.

When you do log out, be sure to reset the cookie, clear down any session state, and redirect the user off the logout. That's so you don't get to log in via the back button. Most frameworks I've seen expect you to do this yourself. Try not to log users out with GET (if you must show a link, at least trap it and call POST via javascript).

> 2. That darn "there's something beyond GET and POST?" thing. I'm pretty much stuck with overloaded POST, I think. Happily, the application I'm writing is naturally heavier on true POSTs than PUTs, at least.
And I'm > not at all clear on what horrible things can happen (again, in > *practice*) with overloaded POST, at least in an environment where dumb > browser is constrained (barring malice/stupidity) by the forms fed to it > by a server aware of its dumbness. Live with POST, understand the implications of overloading. Mostly, the implication is that frameworks (and standards) that derive from the worldview that everything's a form or a CGI suck. They are optimized for you to write stupid code by default. To be clear, that's a lot of frameworks, and a lot of specs. If you've only ever worked inside such frameworks and to such standards (eg if your experience is limited to RPC WS-* stacks and/or Struts action controllers), you might think it's fine in the same way Blub programmers think Blub is fine. You can work around HTML form limitations using Javascript: http://www.mnot.net/javascript/json_form.js cheers Bill
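Bill's pointer is to client-side Javascript; the complementary, widely used server-side workaround is to tunnel PUT and DELETE through POST with a hidden `_method` field that the server unwraps before routing. A minimal sketch (the field name and function are illustrative of the common convention, not taken from json_form.js):

```python
# Sketch of the "method override" workaround for HTML forms, which can
# only submit GET and POST: the form carries a hidden field such as
#   <input type="hidden" name="_method" value="PUT">
# and the server rewrites the request method before dispatching.
# "_method" is a widespread convention, not part of any standard.

def effective_method(request_method, form_fields):
    """Return the method the server should actually route on."""
    override = form_fields.get("_method", "").upper()
    # Only a POST may be overridden, and only to methods forms can't send.
    if request_method == "POST" and override in ("PUT", "DELETE"):
        return override
    return request_method
```

Restricting the override to POST keeps GET safe and cacheable, which is the point of having the distinction in the first place.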
* 'A. Pagaltzis' <pagaltzis@...> [2007-06-15 23:15]: > * Mike Schinkel <mikeschinkel@...> [2007-06-15 20:55]: > > For great developers it is trivial, but for many smaller > > businesses or entrepreneurs w/o hotshot developers on staff > > it is not. So a company publishing a web API can either tell > > its potential users to follow the pure REST hypermedia > > model, or do URI construction. And if they do the former, > > they are likely to get a lot more people using it. Which > > would you choose? If you say hypermedia, I can tell we are > > discussing a hypothetical question and not one on which your > > livelihood depends. > > Seeing as I’m primarily a Perl hacker, I’ll just point to > WWW::Mechanize for this matter. Proof’s in eating the pudding. > Doing hypermedia is very easy given tooling that abstracts the > rote work. I’m basing this on actual experience, not > hypothesis, much as you’d like to paint the REST proponents > with the ivory tower brush. To expand on this point: saying “it’s easy for great developers to write hypermedia clients” sounds to me like the following would if the clock were turned back 15 years: “it’s easy for great developers to implement HTTP clients”. So imagine the clock having turned forward a few years and then consider the argument that hypermedia is hard to program to again. I am not rolling over into a “tools will save us” stance here. The great thing about HTTP is that while no one *needs* to implement it from scratch to get work done, anyone who thinks they must, *can*, because it’s simple enough to admit that possibility. Hypermedia has the same simplicity story; it’s easy to roll tooling for it and in due time the market will have consolidated on Good Enough existing tooling so that people won’t have to, much like people generally use Apache for their server and don’t have to think about it – although some do, and some write alternatives that manage to break into the market such as lighttpd. 
This is exactly what I expect to see in the future. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
2007/6/15, Yohanes Santoso <yahoo-rest-discuss@microjet.ath.cx>:
>
>
>
>
>
>
> "Benoît Fleury" <benoit.fleury@gmail.com> writes:
>
> > Totally agree with you but here I want to choose between two solutions.
> > - use GET and PUT on an entry until you get the lock on it
> > - or use POST only one time to get the lock on the next available
> > entry
>
> Why only those two? The first one degrades quickly as the number of
> consumers increase. I assume the second one entails POSTing to a
> next-entry URL as illustrated in your other email. The response of
> such POSTing is the URL to the popped entry. If there is a problem
> while reading the response (network connection is interrupted or
> client killed, for example), then there is no way to retrieve the
> entry URL returned earlier.
My proposition was to POST the URI of the next entry of a public queue
(/queue/{q_id}/next) to the URI of the personal consumer processing
queue (/consumer/{c_id}/queue), so that in case of a network failure
the client may check its own queue to verify whether a new entry has
been added, or whether the server did not register its POP action (and
retry it).
>
> Why not give each consumer its own access URL? With that a POP is
> always done in two HTTP requests, a GET followed by a DELETE, unless
> there is a network or client or server problem. Then you can repeat
> the GET as many times as you want until the problem goes away.
>
Not sure I understand this protocol ...
-- benoit
> YS.
>
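Yohanes's two-request pop can be modelled in a few lines to show why it tolerates failures: the GET is idempotent and safely repeatable, and the DELETE is the acknowledgement that advances the queue. This is an in-memory sketch with invented names, not a description of any real server:

```python
# Per-consumer queue sketch: GET on the consumer's access URL peeks at
# the head entry (repeatable after a network failure), DELETE
# acknowledges it. A repeated DELETE gets a 404, signalling the entry
# was already popped.

class ConsumerQueue:
    def __init__(self, entries):
        self.entries = list(entries)

    def get(self):
        """The GET: peek at the current entry without consuming it."""
        return self.entries[0] if self.entries else None

    def delete(self, entry):
        """The DELETE: acknowledge the entry and advance the queue."""
        if self.entries and self.entries[0] == entry:
            self.entries.pop(0)
            return 200
        return 404  # already acknowledged, or not the current entry
```

If the client dies between the GET and the DELETE, nothing is lost: the next GET simply returns the same entry again.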
You're right, I meant to say consumer, not reader. I'm exploring the idea of a RESTful message queue with many processing agents consuming messages from the queue. * each message should go to only one consumer * messages shouldn't be lost if a consumer goes away Other posts (like Benoit Fleury's idea of per-consumer queues) are probably the most sensible way to solve this in a RESTful way - without trying to add new HTTP methods. It seems odd to me that there is such good generalized support for multiple producer scenarios (POST(a) to a resource is exactly this), but no support for multiple consumers. I'll check out the xmlrouter project you referenced, I don't think I've seen that before. Thanks, John Heintz On 6/14/07, Mike Dierken <dierken@...> wrote: > I think we are miscommunicating on the phrase 'many readers'. > From my perspective, the web is pretty good at having many readers of a > resource. > I think what you are talking about is having a single /consumer/ of a > message (for a queue). > I also don't see how invoking a new method would help, if we can't even > describe how that new method would operate. > > Interestingly, for publish-subscribe messaging (using a topic), there are > many readers as well as many consumers - that's much easier build in the > REST style. (see http://www.topiczero.com:8080/xmlrouter/) > > > > > > -----Original Message----- > > From: jheintz@... [mailto:jheintz@...] On Behalf > > Of John D. Heintz > > Sent: Wednesday, June 13, 2007 8:40 PM > > To: Mike Dierken > > Cc: Mark Baker; rest-discuss@yahoogroups.com > > Subject: Re: [rest-discuss] Re: Message queues > > > > I'm not trying to specify how many writers, it shouldn't > > matter. I'm thinking about a primitive operation that > > supports readers. > > > > One writer alone could drive hundreds of readers. Just to pick an > > example: one writer pushes positive integers and many readers > > check for prime numbers. 
Not a very useful example on it's > > own, but that single writer could drive many, many readers. > > > > This thread (and the reliable messaging proposals for REST) > > have addressed many issues (like network failures. Perhaps > > ordered messaging isn't covered already. > > > > I don't know if a single generalized primitive operation is > > good enough to address this issue. This seems like a more > > variable problem than "many writers". > > > > One of the things I've been thinging about is how to > > re-implement the EIP patterns in RESTful systems, like > > Competing Consumers > > http://www.enterpriseintegrationpatterns.com/CompetingConsumers.html > > > > This seems like a tough problem without another method. I > > haven't found a really good example of what to base that on > > though. Reading the GFS paper I realized that the append > > operation on GFS files supports many writers, but GFS doesn't > > have an operation for many readers. > > > > John Heintz > > > > On 6/13/07, Mike Dierken <dierken@...> wrote: > > > > Outside of MOM I don't know of any generalized support > > for many readers. > > > > > > You mean, many readers (of a single resource/message) but only one > > > consumer/processor of that one resource? > > > > > > > > > > > > -- > > John D. Heintz > > Principal Consultant > > New Aspects of Software > > Austin, TX > > (512) 633-1198 > > -- John D. Heintz Principal Consultant New Aspects of Software Austin, TX (512) 633-1198
On 6/15/07, Bill de hOra <bill@...> wrote: > Spare me. If you want to specifically accuse someone of handwaving, then > do so. I was going for more of an "I don't want to start something" preface, and I figured naming names would be more along the lines of an "I want to start something... with you, you, and YOU" preface. If I was unclear, I do apologize. > Use cookies; abstract your login code out of the view so that it can be > reused when a non-browser client wants to authenticate. Or even when a smarter browser does, right. > If you want to see a system that does cookies + www-authenticate, look > at Zope2/Plone. There's plenty of implementation in there. Thanks, I'll look at that. > When you do log out, be sure to reset the cookie, clear down any session > state, and redirect the user off the logout. That's so you don't get to > log in via the back button. Most frameworks I've seen expect you to do > this yourself. Try not to log users out with GET (if you must show a > link, at least trap it and call POST via javascript). Login and -out are both POSTs, tentatively. And yeah, if there's Javascript, I shouldn't have to use cookies (barring any showstopping "but then the interface changes too drastically" issues). > Live with POST, understand the implications of overloading. Mostly, the > implication is that frameworks (and standards) that derive from the > worldview that everything's a form or a CGI suck. They are optimized for > you to write stupid code by default. To be clear, that's a lot of > frameworks, and a lot of specs. If you've only ever worked inside such > frameworks and to such standards (eg if your experience is limited to > RPC WS-* stacks and/or Struts action controllers), you might think it's > fine in the same way Blub programmers think Blub is fine. I'm more of an "Ah ha, *that's* why I've never liked any of those frameworks" person. 
I could never put a finger on it, until (and now I can't remember how) I stumbled across a REST article, and hunted down more, and the more I read, the more I went "Yeah, that's *exactly* what I've been doing - only incompletely and hampered by the feeling that I ought to somehow be being more RPC/MVC/etc.-ish." If that makes any sense. > You can work around HTML form limitations using Javascript: Yes, thanks. I do plan to make use of that at the first opportunity.
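For what it's worth, Bill's logout checklist (POST rather than GET, clear the server-side state, reset the cookie, redirect away) could be sketched as a bare WSGI handler. The cookie name, session store, and redirect target here are all invented for illustration:

```python
# Minimal WSGI logout sketch following the checklist discussed above.
# SESSIONS stands in for whatever server-side session store is in use.

SESSIONS = {}  # token -> per-user session state (illustrative)

def logout_app(environ, start_response):
    if environ["REQUEST_METHOD"] != "POST":
        # Logging out via GET invites trouble (prefetchers, crawlers).
        start_response("405 Method Not Allowed", [("Allow", "POST")])
        return [b"use POST"]
    # Crude cookie parsing, enough for a sketch.
    token = environ.get("HTTP_COOKIE", "").replace("session=", "").strip()
    SESSIONS.pop(token, None)  # clear down any session state
    start_response("303 See Other", [
        ("Location", "/"),  # redirect the user off the logout URL
        # Reset the cookie by expiring it in the past.
        ("Set-Cookie", "session=; Expires=Thu, 01 Jan 1970 00:00:00 GMT"),
    ])
    return [b""]

# Simulate a logged-in user POSTing to the logout URL:
SESSIONS["tok123"] = {"user": "karen"}
responses = []
logout_app({"REQUEST_METHOD": "POST", "HTTP_COOKIE": "session=tok123"},
           lambda status, headers: responses.append((status, headers)))
```

The redirect is what defeats the back button: the browser's history holds the redirect target, not the logout response itself.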
Karen wrote: > (After spending a couple of days sifting through the archives, I've come to the conclusion that some members of this group have a moderate-to-severe allergy to discussions of actual implementation (and the concomitant compromises), so if that's you, you may want to grab some antihistamines now.) I think that's more that for many issues the lower-level details of implementation are trivial once you've got the higher-level matters worked out. Take the message queue thread. Lots of competing proposals, but all of them are simple-matter-of-coding once you've decided to do them. There are a lot of threads where we are talking about ideals. Talking about ideals has its place. You can't compromise unless you know what you are compromising between (the ideal and the possible). > 1. Authentication. Far as I can tell, all the options that are both RESTful and secure require something more than vanilla Web 0.1, and so I'm stuck with Apache Basic Auth or something cookie-based. Digest? > I think I'm going to go with the latter, because being able to log out is pretty important, I think. You can do that with Basic or Digest. Would be nice if we didn't have to do the hacky trick that's necessary with almost every browser (Lynx is the only one I can think of that has a proper log-out option by default). > Tentatively, I'd go with a scheme wherein there's a login form with a username/password, host returns a username/session token. A bit RESTless, but nothing beyond the token itself is stored in the server-side session, so I'm thinking the damage to RESTfulness is minimal. Pretty heavy damage in most cases. This is a big scalability crippler in many systems. > So is security, but it's better than transmitting the password in the clear every time. Slightly. Security with these schemes is at best dependent upon you getting everything right (assuming yet another problem with cookie security isn't found). 
The only reason for using cookie-based security is that one is busy and lots of toolkits come with built-in support. It's perfectly reasonable for a developer to make that call IMO, but pretty crappy of the toolkits to have that flaw. I blame the toolkits, but not the developer. > 2. That darn "there's something beyond GET and POST?" thing. I'm pretty much stuck with overloaded POST, I think. Happily, the application I'm writing is naturally heavier on true POSTs than PUTs, at least. And I'm not at all clear on what horrible things can happen (again, in *practice*) with overloaded POST, at least in an environment where dumb browser is constrained (barring malice/stupidity) by the forms fed to it by a server aware of its dumbness. POST for everything is suboptimal rather than wrong. > I'll save the "And now I want to implement REST via SMTP!" My first question on that score is "why?"
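As a reference point for the Digest suggestion in this post: the client never sends the password itself, only a hash chain over it. This is a sketch of the response computation in the original, no-qop form from RFC 2617; the credential values in the test are invented:

```python
# RFC 2617 Digest response, original (no-qop) form:
#   response = MD5( MD5(user:realm:password) ":" nonce ":" MD5(method:uri) )
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    ha1 = md5_hex(f"{user}:{realm}:{password}")  # the secret half
    ha2 = md5_hex(f"{method}:{uri}")             # the request half
    return md5_hex(f"{ha1}:{nonce}:{ha2}")       # what goes on the wire
```

The server, which knows the same HA1, recomputes and compares; issuing a fresh nonce per challenge is what limits replay.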
(This is me replying off-list-then-on-list to Jon's off-list-then-on-list reply, and probably thoroughly confusing any threading in headers. Curse you, Chip Rosenthal!) On 6/15/07, Jon Hanna < jon@...> wrote: > There are a lot of threads where we are talking about ideals. Talking > about ideals has its place. You can't compromise unless you know what > you are compromising between (the ideal and the possible). Right. I'm trying to compromise only where I have to. Figure it'll save trouble down the line. > Digest? Um... I looked at it, and I can't remember now why it wouldn't work down at the dumb browser (and dumb web server, as I just re-emphasized in my previous post) level. What Would Lynx Do? > You can do that with Basic or Digest. Would be nice if we didn't have to > do the hacky trick that's necessary with almost every browser (Lynx is > the only one I can think of that has a proper log-out option by default). You know, I didn't know you could log out with Lynx. > Pretty heavy damage in most cases. This is a big scalability crippler in > many systems. Really? It's a database-intensive app already, so in a high-usage situation you'd already be dealing with multiple servers, and the session database would be pretty trivial in comparison to other stuff. 99% of the time, auth wouldn't be required so the server would be ignoring the cookies and not making the lookup. Plus, Javascript would hijack the non-dumb browser and go to a properly-RESTful authen mode, so it's only going to be happening for the small percentage of people without JS. > Security with these schemes is at best dependant upon you getting every > right (assuming yet another problem with cookie security isn't found). Yeah, it worries me a bit. I wish I knew either a lot more, or a lot less about security. > The only reason for using cookie-based security is one is busy and lots > of toolkits come with built-in support. 
It's perfectly reasonable for a > developer to make that call IMO, but pretty crappy of the toolkits to > have that flaw. I blame the toolkits, but not the developer. I think of it as a kludge, like overloaded POST, not to be used unless there's nothing better. (And I started this thread because I really want to be *sure* there's nothing better.) > My first question on that score is "why?" Because it's there! (Seriously? I'm not serious. Other than a lot of the functionality of the app will also be accessible by email, and probably some of the decisions on how *that* works will be influenced by the architecture of the web side of it.)
Karen wrote: > (This is me replying off-list-then-on-list to Jon's > off-list-then-on-list reply, and probably thoroughly confusing any > threading in headers. Curse you, Chip Rosenthal!) And this is hopefully me replying on-list as intended this time. Sorry about the earlier error. >> Digest? > > Um... I looked at it, and I can't remember now why it wouldn't work > down at the dumb browser (and dumb web server, as I just re-emphasized > in my previous post) level. What browser doesn't support Digest? I know that dealing with this can be tricky with some small-budget hosts, but not all. > What Would Lynx Do? Support it out of the box. Best support around last time I looked (all the others have a design flaw in not having a log-out button, though I think there's a firefox extension that does that). > Really? It's a database-intensive app already, so in a high-usage > situation you'd already be dealing with multiple servers, and the > session database would be pretty trivial in comparison to other stuff. It means that you have n resources multiplied by m sessions to create n*m resources with an n*m management task. Since sessions are generally not something you can reuse safely m is infinite. > 99% of the time, auth wouldn't be required so the server would be > ignoring the cookies and not making the lookup. Stop the cookies actually being sent at all in these cases and we can make that part scale. > Plus, Javascript would > hijack the non-dumb browser and go to a properly-RESTful authen mode, > so it's only going to be happening for the small percentage of people > without JS. With my current pet project I'm currently split on whether I do this or not. The only advantage in doing this is that some people don't like the pop-ups browsers use for auth - so if I decide to use js to push past that I might do the above. Then again I personally do like the pop-ups, since they tend to be more secure than cookie-based sessions (if Digest rather than Basic). 
>> Security with these schemes is at best dependant upon you getting every >> right (assuming yet another problem with cookie security isn't found). > > Yeah, it worries me a bit. I wish I knew either a lot more, or a lot > less about security. You *really* can't code securely if you don't get the security issues at hand very well. Time for some research. > I think of it as a kludge, like overloaded POST, not to be used unless > there's nothing better. (And I started this thread because I really > want to be *sure* there's nothing better.) There's plenty better, but I use sessions all the time when I need stuff done quickly because in many cases I can use the "Session object" in a given tool kit and do in 5 minutes what takes a large amount of code to do. I plan to build myself the tools that'll give me that same 5-minute-to-working-code advantage with other methods though. If I ever do get it done I'll release it. > (Seriously? I'm not serious. Other than a lot of the functionality of > the app will also be accessible by email, and probably some of the > decisions on how *that* works will be influenced by the architecture > of the web side of it.) I think I would probably think of that in a very separate way. Can't say more without knowing more; and even then maybe a whiteboard, some good coffee and regular smoke-breaks would also help.
On 6/15/07, Jon Hanna <jon@...> wrote: > What browser doesn't support Digest? > Support it out of the box. Best support around last time I looked (all > the others have a design flaw in not having a log-out button, though I > think there's a firefox extension that does that). I went and refreshed my memory... that was an Apache requirement problem. > It means that you have n resources multiplied by m sessions to create > n*m resources with an n*m management task. > Since sessions are generally not something you can reuse safely m is > infinite. Possibly "session" is the wrong term. My current placeholder is that the browser fills out a username+password form, and the server responds with a cookie containing the username and a "session" token that's an MD5 digest. The server stores the username, token, and timestamp, and eventually decides that the timestamp is too old. So I'm not understanding the "n resources" part... you mean over the course of the user's visit? > Stop the cookies actually being sent at all in these cases and we can > make that part scale. You lost me again... the browser keeps sending the cookies (back) and the server only verifies them when authen is actually required. And only the dumb browsers have cookies. > You *really* can't code securely if you don't get the security issues at > hand very well. Time for some research. Always, yes. I know a fair amount about security, but some of it is "I know enough to know there's much I don't know." > There's plenty better, but I use sessions all the time when I need stuff > done quickly because in many cases I can use the "Session object" in a > given tool kit and do in 5 minutes what takes a large amount of code to do. Aside from the state (or perhaps "state") of "authenticated or not," I don't have any state I need to deal with, I think. > I think I would probably think of that in a very separate way. 
Can't say > more without knowing more; and even then maybe a whiteboard, some good > coffee and regular smoke-breaks would also help. We're planning to be at YAPC in Houston in a couple of weeks...
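A sketch of the scheme Karen describes a few posts up (server stores username, token, and timestamp, and expires by age); the store, lifetime, and function names are invented, and a random token is used in place of her MD5 digest, since the token only needs to be unguessable, not a digest of anything:

```python
# Session-token sketch: on login, mint a token and record who it
# belongs to and when it was issued; validation checks the age. The
# `now` parameters just make the sketch testable without waiting.
import secrets
import time

TOKENS = {}          # token -> (username, issued_at); illustrative store
LIFETIME = 30 * 60   # seconds; arbitrary choice

def issue_token(username, now=None):
    token = secrets.token_hex(16)  # unguessable, from the OS RNG
    TOKENS[token] = (username, time.time() if now is None else now)
    return token

def check_token(token, now=None):
    now = time.time() if now is None else now
    entry = TOKENS.get(token)
    if entry and now - entry[1] < LIFETIME:
        return entry[0]          # the authenticated username
    TOKENS.pop(token, None)      # unknown or too old: purge it
    return None
```

This keeps Jon's objection in view: the per-session state is one small record, but it is still state the server must store and eventually clean up.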
Well, the Web evolved as a read/write medium for people, not as a work delegation medium for software applications. It turns out that the solution that emerged on the Web was one of state transfer of representations of resources and I'm guessing that the Web's approach is fundamentally useful to both queues and topics for software coordination. But it's important to note that the Web /evolved/. It wasn't designed. It was, however, improved based on a keen eye, which is why I suggested people work on actual implementations that can be evaluated against the expectations. The key expectation here is that there is only one consumer. The next most important part is that the message should not be lost in the face of a failed consumer. Build something or describe something, then measure it against this and we'll start seeing progress. As for xmlrouter, that's just a little project I poke at occasionally which was an offshoot of my work at KnowNow many ages ago. If anybody knows erlang and has some spare time and would like to change the world, give me a call. Mike > -----Original Message----- > From: jheintz@... [mailto:jheintz@...] On Behalf > Of John D. Heintz > Sent: Friday, June 15, 2007 4:16 PM > To: Mike Dierken > Cc: Mark Baker; rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Re: Message queues > > Your right, I meant to say consumer, not reader. > > I'm exploring the idea of a RESTful message queue with many > processing agents consuming messages from the queue. > * each message should go to only one consumer > * messages shouldn't be lost if a consumer goes away > > Other posts (like Benoit Fleury's idea of per-consumer > queues) are probably the most sensible way to solve this in a > RESTful way - without trying to add new HTTP methods. > > It seems odd to me that there is such good generalized > support for multiple producer scenarios (POST(a) to a > resource is exactly this), but no support for multiple consumers. 
> > I'll check out the xmlrouter project you referenced, I don't > think I've seen that before. > > Thanks, > John Heintz > > On 6/14/07, Mike Dierken <dierken@...> wrote: > > I think we are miscommunicating on the phrase 'many readers'. > > From my perspective, the web is pretty good at having many > readers of > > a resource. > > I think what you are talking about is having a single > /consumer/ of a > > message (for a queue). > > I also don't see how invoking a new method would help, if we can't > > even describe how that new method would operate. > > > > Interestingly, for publish-subscribe messaging (using a > topic), there > > are many readers as well as many consumers - that's much > easier build > > in the REST style. (see http://www.topiczero.com:8080/xmlrouter/) > > > > > > > > > > > -----Original Message----- > > > From: jheintz@... [mailto:jheintz@...] On > Behalf Of John > > > D. Heintz > > > Sent: Wednesday, June 13, 2007 8:40 PM > > > To: Mike Dierken > > > Cc: Mark Baker; rest-discuss@yahoogroups.com > > > Subject: Re: [rest-discuss] Re: Message queues > > > > > > I'm not trying to specify how many writers, it shouldn't > matter. I'm > > > thinking about a primitive operation that supports readers. > > > > > > One writer alone could drive hundreds of readers. Just to pick an > > > example: one writer pushes positive integers and many > readers check > > > for prime numbers. Not a very useful example on it's own, > but that > > > single writer could drive many, many readers. > > > > > > This thread (and the reliable messaging proposals for REST) have > > > addressed many issues (like network failures. Perhaps ordered > > > messaging isn't covered already. > > > > > > I don't know if a single generalized primitive operation is good > > > enough to address this issue. This seems like a more variable > > > problem than "many writers". 
> > > > > > One of the things I've been thinging about is how to re-implement > > > the EIP patterns in RESTful systems, like Competing Consumers > > > > http://www.enterpriseintegrationpatterns.com/CompetingConsumers.html > > > > > > This seems like a tough problem without another method. I haven't > > > found a really good example of what to base that on > though. Reading > > > the GFS paper I realized that the append operation on GFS files > > > supports many writers, but GFS doesn't have an operation for many > > > readers. > > > > > > John Heintz > > > > > > On 6/13/07, Mike Dierken <dierken@...> wrote: > > > > > Outside of MOM I don't know of any generalized support > > > for many readers. > > > > > > > > You mean, many readers (of a single resource/message) > but only one > > > > consumer/processor of that one resource? > > > > > > > > > > > > > > > > > -- > > > John D. Heintz > > > Principal Consultant > > > New Aspects of Software > > > Austin, TX > > > (512) 633-1198 > > > > > > > -- > John D. Heintz > Principal Consultant > New Aspects of Software > Austin, TX > (512) 633-1198
* Jon Hanna <jon@...> [2007-06-16 03:20]: > It means that you have n resources multiplied by m sessions to > create n*m resources with an n*m management task. > > Since sessions are generally not something you can reuse safely > m is infinite. I don’t really get this part. If all, and I mean *all*, I’m using a cookie for is to store an auth token, then how is this less scalable than Digest? I see no reason that scalability should differ depending on whether you look up an auth token found in the cookie header or a nonce to validate the hash found in the auth header. The cost seems to have to be identical in both cases. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
http://www.infoq.com/news/2007/06/entity-services Confusing (or confused) discussion. I can't figure out if they are saying CRUD is bad, or if they are arguing against a resource-oriented (noun-oriented) design and would prefer a method or verb oriented design. I suspect the latter.
I've discovered one more problem with my "dumb browser, host-restricted Apache" scenario - I'm pretty sure it can be solved by re-framing it, but I haven't *quite* got there. Suppose, for a moment, we have a nicely RESTful web interface to a Usenet server - the interface behaves as the user's newsreader. Among other things, it has to maintain a list of read messages which it does, as is common, with a newsrc file. A run list is maintained for each newsgroup. Resources are pretty obvious... user, newsgroup, article. We can do a PUT to the article (to which each user gets his "own" URL) modifying the flag that indicates whether it's been read, and the app can update the newsrc accordingly, and life is good. At least, as long as we want to mark just one article at a time. Dumb browser can't be told "fire off five PUT requests to mark this group of five articles," and we sure as heck don't want the user to click five "mark as read" buttons, with dumb browser repainting the whole page in between. Even smart applications reeeeally don't want to send a thousand or so PUT requests when the user says "catch up" on a new newsgroup. So if we make the newsrc a resource, we can spoon-feed dumb browser a form at the bottom of the page we serve, that includes the newsrc as it would look with the five articles marked read, and if the user wants to mark them read, dumb browser can PUT the preconstructed form to the newsrc, and the server's RESTfully happy. So far, so good. But suppose Joe User's been paging through the articles. He's got a page with articles 1-5, a page with 6-10, a page with 11-15. And then he decides to mark them read. Dumb browser sends (PUTs, or at least POST-overloadeds) the page 3 newsrc run list: 11-15. Server's happy. Joe pages back to the second page, and marks those read. 
Dumb browser says "PUT that newsrc to '6-10'" Now, we don't want to serve Joe a 409 Conflict, and we don't want to unmark articles 11-15, and we don't want to make it impossible for a smarter application to say "mark 6-10 read, unmark 11-15" all at once on purpose. It's not too difficult to cater to dumb browser and put something in the form that determines whether to union, xor, diff, or intersect the PUT run list with the server's existing run list, but I'm wondering if there's an approach that eliminates the problem altogether.
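Karen's "operation in the form" idea can be sketched with run lists modelled as sets of article numbers; the operation names follow the ones she lists, everything else is invented:

```python
# The client PUTs a run list plus a combining operation; the server
# folds it into the stored read-list instead of overwriting it.

OPS = {
    "union":     lambda stored, sent: stored | sent,  # mark as read
    "diff":      lambda stored, sent: stored - sent,  # unmark
    "intersect": lambda stored, sent: stored & sent,
    "xor":       lambda stored, sent: stored ^ sent,  # toggle
}

def apply_runlist(stored, sent, op="union"):
    return OPS[op](set(stored), set(sent))

# Joe marks page 3 (articles 11-15), then pages back and marks 6-10;
# with "union" the second PUT no longer clobbers the first.
read = apply_runlist(set(), range(11, 16))
read = apply_runlist(read, range(6, 11))
```

The price of this design is that the PUT is no longer a full-replacement PUT; the operation field turns it into something closer to an overloaded POST with honest labelling.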
http://dotnetwithme.blogspot.com/2007/06/rest-vs-rest.html > You have come to the right place : > - if you have never heard about the architecture style called 'REST'. > - You have heard about it but don't know anything about it. ...and you don't want to know anything about it when you're done. Brandon
Bill de hOra wrote: > Karen wrote: > > > (After spending a couple of days sifting through the archives, I've > > come to the conclusion that some members of this group have a > > moderate-to-severe allergy to discussions of actual implementation > > (and the concomitant compromises), so if that's you, you > > may want to grab some antihistamines now.) > > Spare me. If you want to specifically accuse someone of > handwaving, then do so. Sorry Bill, I gotta support Karen on that one. Maybe what she said is untrue, but someone new to the list definitely gets the impression she got... > > 1. Authentication. Far as I can tell, all the options > > that are both RESTful and secure require something more > > than vanilla Web 0.1, and so I'm stuck with Apache > > Basic Auth or something cookie-based. I think I'm going > > to go with the latter, because being able to log out is > > pretty important, I think. Tentatively, I'd go with a > > scheme wherein there's a login form with a > > username/password, host returns a username/session > > token. A bit RESTless, but nothing beyond the token > > itself is stored in the server-side session, so I'm > > thinking the damage to RESTfulness is minimal. So is > > security, but it's better than transmitting the > > password in the clear every time. Slightly > > Use cookies; abstract your login code out of the view so > that it can be reused when a non-browser client > wants to authenticate. > > If you want to see a system that does cookies + > www-authenticate, look at Zope2/Plone. There's plenty of > implementation in there. Can you give some more specific examples? Zope/Plone is rather large, and large apps don't make for good example code unless a person has a week or two to install and get to know the architecture first. > Try not to log users out with GET (if you must show a > link, at least trap it and call POST via javascript). Why not? (honest question, trying to learn.) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
That darn "there's something beyond GET and POST?" > > thing. I'm pretty much stuck with overloaded POST, I > > think. Happily, the application I'm writing is > > naturally heavier on true POSTs than PUTs, at least. > > And I'm not at all clear on what horrible things can > > happen (again, in *practice*) with overloaded POST, at > > least in an environment where dumb browser is > > constrained (barring malice/stupidity) by the forms fed > > to it by a server aware of its dumbness > Live with POST, understand the implications of > overloading. Mostly, the implication is that frameworks > (and standards) that derive from the worldview that > everything's a form or a CGI suck. They are optimized for > you to write stupid code by default. To be clear, that's > a lot of frameworks, and a lot of specs. If you've only > ever worked inside such frameworks and to such standards > (eg if your experience is limited to RPC WS-* stacks > and/or Struts action controllers), you might think it's > fine in the same way Blub programmers think Blub is fine. Why do they suck? (again, honest question, trying to learn.) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
On Jun 17, 2007, at 3:38 PM, Karen wrote:

> I've discovered one more problem with my "dumb browser,
> host-restricted Apache" scenario - I'm pretty sure it can be solved
> by re-framing it, but I haven't *quite* got there.
>
> Suppose, for a moment, we have a nicely RESTful web interface to a
> Usenet server - the interface behaves as the user's newsreader.
> Among other things, it has to maintain a list of read messages,
> which it does, as is common, with a newsrc file. A run list is
> maintained for each newsgroup.
>
> Resources are pretty obvious... user, newsgroup, article. We can do
> a PUT to the article (to which each user gets his "own" URL)
> modifying the flag that indicates whether it's been read, and the
> app can update the newsrc accordingly, and life is good.
>
> At least, as long as we want to mark just one article at a time.
> Dumb browser can't be told "fire off five PUT requests to mark this
> group of five articles," and we sure as heck don't want the user to
> click five "mark as read" buttons, with dumb browser repainting the
> whole page in between. Even smart applications reeeeally don't want
> to send a thousand or so PUT requests when the user says "catch up"
> on a new newsgroup.
>
> So if we make the newsrc a resource, we can spoon-feed dumb browser
> a form at the bottom of the page we serve, that includes the newsrc
> as it would look with the five articles marked read, and if the user
> wants to mark them read, dumb browser can PUT the preconstructed
> form to the newsrc, and the server's RESTfully happy.
>
> So far, so good. But suppose Joe User's been paging through the
> articles. He's got a page with articles 1-5, a page with 6-10, a
> page with 11-15. And then he decides to mark them read. Dumb browser
> sends (PUTs, or at least POST-overloadeds) the page 3 newsrc run
> list: 11-15. Server's happy. Joe pages back to the second page, and
> marks those read. Dumb browser says "PUT that newsrc to '6-10'"
>
> Now, we don't want to serve Joe a 409 Conflict, and we don't want to
> unmark articles 11-15, and we don't want to make it impossible for a
> smarter application to say "mark 6-10 read, unmark 11-15" all at
> once on purpose. It's not too difficult to cater to dumb browser and
> put something in the form that determines whether to union, xor,
> diff, or intersect the PUT run list with the server's existing run
> list, but I'm wondering if there's an approach that eliminates the
> problem altogether.

Something like this was discussed a few weeks ago. If I recall
correctly, the consensus was that PUT did not have to contain the
entire resource representation, but could just contain the affected
items. Thus, if the PUT contained an XML document (or equivalent
thereof) that looks as follows:

<status>
  <item id="10" mark="read" />
  <item id="11" mark="read" />
  <item id="12" mark="read" />
  <item id="13" mark="read" />
  <item id="14" mark="read" />
</status>

the server could apply the changes to only the supplied items. That
is, items 1-9 and 15+ would remain unaffected.

- Steve

--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
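Steve's merge rule can be sketched in a few lines. This is a toy
illustration only (the function name `apply_partial_put` and the
`{message_id: mark}` store are my inventions, not any real API): the PUT
body names only the affected items, and the server merges just those.

```python
import xml.etree.ElementTree as ET

def apply_partial_put(read_status, body):
    # Merge only the items named in the <status> document into the
    # stored {message_id: mark} map; unnamed items stay untouched.
    for item in ET.fromstring(body).findall("item"):
        read_status[int(item.get("id"))] = item.get("mark")
    return read_status

store = {9: "unread", 10: "unread", 15: "unread"}
body = """<status>
  <item id="10" mark="read" />
  <item id="11" mark="read" />
</status>"""
apply_partial_put(store, body)
# item 10 is updated, item 11 is created; 9 and 15 are unaffected
```

Whether this merge behavior belongs under PUT (rather than POST or a
separate method) is exactly what the rest of the thread argues about.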
Karen wrote:
> Suppose, for a moment, we have a nicely RESTful web
> interface to a Usenet server - the interface behaves as
> the user's newsreader. Among other things, it has to
> maintain a list of read messages which it does, as is
> common, with a newsrc file. A run list is maintained for
> each newsgroup.
>
> <snip>
>
I'm not sure I completely followed your dillema, but I want to propose a
potentially RESTful solution and see if it makes sense to you as well as the
others here (doing this helps me learn...)
To reiterate, on each page you've got five messages so you've got messages
1-5, 6-10, 11-15, on pages 1, 2, 3, respectively.
Why not create a resource for "read status" like so:
http://example.com/{newsgroup}/read-status/messages-{n}-to-{n+5}/
The resource's representation could be defined as such:
<Messages>
<ReadStatus message_id="{message_id}">{yes_or_no}</ReadStatus>
...
</Messages>
Do a GET using Javascript to:
http://example.com/whatever/read-status/messages-101-to-105/
And your response is:
<Messages>
<ReadStatus message_id="101">no</ReadStatus>
<ReadStatus message_id="102">no</ReadStatus>
<ReadStatus message_id="103">no</ReadStatus>
<ReadStatus message_id="104">no</ReadStatus>
<ReadStatus message_id="105">no</ReadStatus>
</Messages>
Modify the resource to indicate each has been read then PUT using Javascript
to the same URL:
<Messages>
<ReadStatus message_id="101">yes</ReadStatus>
<ReadStatus message_id="102">yes</ReadStatus>
<ReadStatus message_id="103">yes</ReadStatus>
<ReadStatus message_id="104">yes</ReadStatus>
<ReadStatus message_id="105">yes</ReadStatus>
</Messages>
Does this solve your problem? To the others on the list, is this a good
RESTful solution (thanks in advance)?
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
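Mike's read-status resource can be mocked up in memory to check that the
GET/PUT round trip behaves as described. Everything here (the class name,
the range arguments standing in for the `messages-{n}-to-{n+5}` URL) is
illustrative scaffolding, not a real server:

```python
import xml.etree.ElementTree as ET

class ReadStatusResource:
    """Toy in-memory stand-in for the proposed read-status resource."""

    def __init__(self, read_ids=()):
        self.read_ids = set(read_ids)

    def get(self, first, last):
        # GET .../read-status/messages-{first}-to-{last}/
        root = ET.Element("Messages")
        for mid in range(first, last + 1):
            el = ET.SubElement(root, "ReadStatus", message_id=str(mid))
            el.text = "yes" if mid in self.read_ids else "no"
        return ET.tostring(root, encoding="unicode")

    def put(self, first, last, body):
        # A complete representation for this range only; messages
        # outside first..last are deliberately out of scope.
        for el in ET.fromstring(body).findall("ReadStatus"):
            mid = int(el.get("message_id"))
            if first <= mid <= last:
                if el.text == "yes":
                    self.read_ids.add(mid)
                else:
                    self.read_ids.discard(mid)

res = ReadStatusResource()
doc = res.get(101, 105)                      # all "no" initially
res.put(101, 105, doc.replace("no", "yes"))  # mark the whole range read
```

Note the key property: because the resource itself is scoped to a range,
the PUT is "complete" for that range, which sidesteps the partial-PUT
debate for this particular design.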
Why not just use POST for partial update style operations? What would
break?

> -----Original Message-----
> From: rest-discuss@yahoogroups.com
> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Steve Bjorg
> Sent: Sunday, June 17, 2007 10:07 PM
> To: Karen
> Cc: rest-discuss@yahoogroups.com
> Subject: Re: [rest-discuss] "Partial" PUT - a RESTful run list?
>
> <snip>
Mike Dierken wrote:
> Why not just use POST for partial update style operations?
> What would break?

Why not just use PUT? We can use POST for everything, but when another
method matches perfectly it's generally best to use that.

A representation contains information about a resource. By their very
nature, representations will often not contain all available
information about a resource (producing a representation is
conceptually lossy, and needfully so when one representation excludes
information available in another).

Turn the question around. Can we have a PUT that isn't partial? I
think for certain resources that's impossible, and for many
undesirable.
Or is it REST on CAW? Or perhaps simply OT.

What if "CAW URI = hash(Content)"?

The point is that URIs are abstract addresses. The question is how do
we associate them with physical (e.g. IP via DNS) addresses? Beyond
that, how do we associate them with resources?

I have no real idea how this would work... but after 50 years as a
programmer I suspect that, if the past is a guide, then another decade
will bring things we do not expect today.

BobLQ

---------- Forwarded message ----------
From: Brad Collins <brad@...>
Date: Jun 17, 2007 11:53 PM
Subject: Re: CAW
To: Bob La Quey <robert.laquey@...>

Just started looking through this. Very interesting indeed. Found this
paper as well.

http://open-content.net/specs/draft-jchapweske-caw-03.html

But as you said -- this will take a lot of thought before it will sink
in.

"Bob La Quey" <robert.laquey@...> writes:
> Beyond IP Addresses one needs a Content Addressable Web (CAW)
>
> What might this be?
>
> Clues are to be found here:
> http://www.cs.cornell.edu/People/egs/beehive/
>
> This is all going to take some time and deep thought to sort out.
> Still, BMF is one way to get at looking at this from an abstract
> point of view.
>
> What if CAW Address = hash(Content)?
>
> Then one bases an Internet on these addresses.
>
> If one has a mapping between CAW Address and IP Address then one can
> begin by hosting a CAW Web on the existing Internet, esp. I6.
>
> Too much for my old mind,
>
> BobLQ

--
Brad Collins <brad@...>, Bankwao, Thailand
Blog: http://deerpig.blogspot.com
For BMF see
http://www.idealliance.org/papers/extreme/Proceedings/html/2006/Collins01/EML2006Collins01.html
On 6/17/07, Mike Schinkel <mikeschinkel@...> wrote:
> Bill de hOra wrote:
> > Live with POST, understand the implications of overloading.
> > Mostly, the implication is that frameworks (and standards) that
> > derive from the worldview that everything's a form or a CGI suck.
> > They are optimized for you to write stupid code by default. To be
> > clear, that's a lot of frameworks, and a lot of specs. If you've
> > only ever worked inside such frameworks and to such standards (eg
> > if your experience is limited to RPC WS-* stacks and/or Struts
> > action controllers), you might think it's fine in the same way
> > Blub programmers think Blub is fine.
>
> Why do they suck? (again, honest question, trying to learn.)

e.g. JSP and ASP funnel "actions" through one CGI and you pass
different parameters on the query string. That's RPC, effectively.
It's not modeling your webapp as individually addressable resources.

Hugh
> Can you give some more specific examples? Zope/Plone is rather
> large, and large apps don't make for good example code unless a
> person has a week or two to install and get to know the architecture
> first.

I pulled down the code last night... and authentication stuff can't be
buried too deep by its nature (and it's Python, which isn't too
hostile to a reading by non-Python programmers). Of course, there's
always the problem that that's the bit most likely to be *bypassed* by
a demo site, but I imagine I can find somewhere to look at it from the
new-user perspective.

'Scool.
On 6/18/07, Jon Hanna <jon@...> wrote: > Turn the question around. Can we have a PUT that isn't partial? I think > for certain resources that's impossible and for many undesirable. Possibly "symmetrical" and "asymmetrical," or words to that effect, might be better than "complete" and "partial"?
Jon Hanna wrote:
> Mike Dierken wrote:
> > Why not just use POST for partial update style operations?
> > What would break?
>
> Why not just use PUT? We can use POST for everything, but when
> another method matches perfectly it's generally best to use that.

The consequences of mixing up a merge and an overwrite operation
justify alternate methods imo. Consensus (on atom-syntax at the very
least) is that PUT is not a patch/merge operator. To distinguish
between the two forms of update you can switch on something else, like
a media type (or a type param), but you have to define it per media
type as opposed to once in the method (ie it's not uniform for all
media). I've always felt the time to use a new method is when the old
method is dangerously close to what you want to do, but incorrect. All
this said understanding the cost of introducing new methods.

RDF/XML (application/rdf+xml) is interesting input to a gedanken -
because of its graph structure it's perfect for clean merging and
partial updates, but how does a server know when to merge with the
existing data graph and when to replace it? When you solve it using
PUT/POST, how will you solve it for Turtle or RDFa?

cheers
Bill
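Bill's alternative of switching on media type rather than method looks
like this in miniature. The media type `application/x-example-patch` is
invented for illustration; the point is only that one PUT handler ends
up branching on Content-Type to decide between merge and replace:

```python
def handle_put(store, content_type, body):
    # store: current resource state (dict); body: dict of fields.
    # The patch media type name below is a made-up example.
    if content_type == "application/x-example-patch":
        store.update(body)       # merge: fields not mentioned survive
    else:
        store.clear()            # replace: representation is complete
        store.update(body)
    return store

merged = handle_put({"a": 1, "b": 2}, "application/x-example-patch", {"b": 3})
replaced = handle_put({"a": 1, "b": 2}, "application/json", {"b": 3})
```

The cost Bill names shows up here: the merge-vs-replace rule lives
inside one media type's definition, so every new format (Turtle, RDFa,
...) has to re-specify it, whereas a distinct method would state it once
for all media.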
> The consequence of mixing up a merge and overwrite operation justify
> alternate methods imo. Consensus (on atom-syntax at the very least) is
> PUT is not a patch/merge operator. To distinguish between the two forms
> of update. You can switch on something else, like a media type (or a
> type param), but you have to define it per media type as opposed to once
> in the method (ie it's not uniform for all media). I've always felt the
> time to use a new methods is when the old method is dangerously close to
> what you want to do, but incorrect. All this said understanding the cost
> of introducing new methods.
Hmm. Good point. I'd like to avoid new methods if I can, just on principle.
I'm debating the possibility of adjusting the granularity so runs are
resources, in which case even dumb browser can deal with a "complete"
PUT. Runs overlap, so there's that to consider, but there shouldn't be
any showstoppers.
That'd make the pages in the example at the top be representations of
a complete resource ("the run 1-5," etc.) instead of partial
representations of a container resource, which ought to only be a good
thing.
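Karen's "runs as resources" adjustment can be checked with a small
sketch (the function name and set-based store are my own illustration):
each run is its own resource, so a complete PUT of the run replaces
state only for the articles it spans, and marking 6-10 can never
clobber 11-15.

```python
def put_run(read_ids, first, last, marked):
    # A complete PUT of the run resource covering articles first..last:
    # state inside the run is replaced, state outside it is untouched.
    for aid in range(first, last + 1):
        read_ids.discard(aid)
    read_ids.update(a for a in marked if first <= a <= last)
    return read_ids

read = {11, 12, 13, 14, 15}             # page 3 already marked read
put_run(read, 6, 10, {6, 7, 8, 9, 10})  # PUT the page-2 run
# page 3's marks survive: read now covers 6-15
```

The overlap question remains a design decision: if runs "1-5" and "3-8"
are both resources, a PUT to one changes the representation of the
other, which clients have to expect.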
Bill de hOra wrote:
> Jon Hanna wrote:
> > Mike Dierken wrote:
> > > Why not just use POST for partial update style operations?
> > > What would break?
> >
> > Why not just use PUT? We can use POST for everything, but when
> > another method matches perfectly it's generally best to use that.
>
> The consequence of mixing up a merge and overwrite operation justify
> alternate methods imo. Consensus (on atom-syntax at the very least)
> is PUT is not a patch/merge operator. ...

OK, once middleware and intermediaries have been fixed to properly
handle extension methods (*), what exactly is the cost of introducing
a new method? It doesn't seem to be really different from introducing
new content types or new HTTP headers. (**)

Of course, that's not an excuse to invent lots of them (MKCALENDAR and
MKADDRBOOK come to mind), but I think for PATCH it is very clear that
it could gain widespread use (the next candidate may be LINK).

Best regards, Julian

(*) which I think already has happened (based on many years of
experience supporting the WebDAV stack inside SAP NetWeaver)

(**) of course there should also be an IANA registry, but that's a
todo for RFC2616bis.
Julian Reschke wrote:
> OK, once middleware and intermediaries have been fixed to properly
> handle extension methods (*),

That would be great!

> what exactly is the cost of introducing a new method?

The cost of non-uniformity?, he said, clearly dodging the question

cheers
Bill
> Julian Reschke wrote:
> > what exactly is the cost of introducing a new method?
>
> The cost of non-uniformity?, he said, clearly dodging the question

I thought the uniformity constraint was supposed to be that the method
is potentially applicable to all resources.
On 6/18/07, Karen <karen.cravens@...> wrote:
> Hmm. Good point. I'd like to avoid new methods if I can, just on
> principle.

More than just good principle: best practice. New HTTP methods don't
go through firewalls, and aren't supported by all the various HTTP
client libs out there. In particular, if javascript in
IE/Firefox/WebKit doesn't support the method, it ain't going to be
used in AJAX land.

If someone were to start a REST design patterns doc, "inventing new
verbs" would be the antipattern, one that has just surfaced again in
Web3S.

I know that "PATCH" may have appeal, but once you try to have a
complete sequence of operations in a single verb, you have to worry
about whether the sequence is atomic, what its rollback-on-failure
mode is, whether it is partially visible, etc. WS-RF's bulk get/set
had this problem: they didn't spec the atomicity/isolation, so you
have no way of knowing if you are doing atomic or non-atomic ops.
Whereas if you have to make a set of operations, well, your
expectations are set nice and low.

-steve
Hugh Winkler wrote:
> Mike Schinkel wrote:
> > Bill de hOra wrote:
> > > Live with POST, understand the implications of overloading.
> > > Mostly, the implication is that frameworks (and standards) that
> > > derive from the worldview that everything's a form or a CGI
> > > suck. They are optimized for you to write stupid code by
> > > default. To be clear, that's a lot of frameworks, and a lot of
> > > specs. If you've only ever worked inside such frameworks and to
> > > such standards (eg if your experience is limited to RPC WS-*
> > > stacks and/or Struts action controllers), you might think it's
> > > fine in the same way Blub programmers think Blub is fine.
> >
> > Why do they suck? (again, honest question, trying to learn.)
>
> e.g. JSP and ASP funnel "actions" through one CGI and you pass
> different parameters on the query string. That's RPC, effectively.
> It's not modeling your webapp as individually addressable resources.

Ah, I see; thanks. I actually already knew that, but the wording Bill
used confused me, i.e. "where everything's a form or a CGI suck." That
doesn't tell me that he was talking about using only one URL and
passing it verbs, aka RPC, and I can't tell if he thinks CGI sucks, or
if it's sucking data from a CGI URL.

For an area where small wording differences generate huge debates,
it's evidently pretty important that wording be used carefully. E.g.
some people might view this as being one URL (if they hadn't studied
the RFCs), though it would still be RESTful (right? But I think it is
awful URL design):

http://api.example.com/service?object=messagelist
http://api.example.com/service?object=message&id=123

And we can have multiple URLs that are not RESTful (right?):

http://api.example.com/service/getMessageList
http://api.example.com/service/saveMessage/123

So making sure terminology is clear, and ideally generally agreed
upon, is pretty important.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
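The resource-vs-RPC distinction Hugh and Mike are circling can be shown
as a tiny router sketch. The routes and handler names here are
invented for illustration; the point is that the URL names a thing, and
the HTTP method (GET/PUT/DELETE) carries the verb:

```python
import re

# Resource-oriented routing: each pattern names a resource, not an
# action. "getMessageList"-style action URLs have no place in the table.
ROUTES = [
    (re.compile(r"^/messages/?$"), "message_list"),
    (re.compile(r"^/messages/(\d+)$"), "message"),
]

def route(path):
    # Map a path to (resource_name, captured_args); unknown paths get
    # (None, ()) — a 404, not a fall-through to a generic "service".
    for pattern, name in ROUTES:
        m = pattern.match(path)
        if m:
            return name, m.groups()
    return None, ()
```

Under this scheme `/messages/123` resolves to the `message` resource
with id `123`, while an RPC-style path like `/service/getMessageList`
simply doesn't route.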
Steve Loughran wrote:
> On 6/18/07, Karen <karen.cravens@...> wrote:
> > Hmm. Good point. I'd like to avoid new methods if I can, just on
> > principle.
>
> More than just good principle: best practice. New HTTP methods
> don't go through firewalls, and aren't supported by all the various
> http client

That's not my experience. Any examples?

> libs out there. In particular, if javascript in IE/firefox/webkit
> doesn't support the method, it ain't going to be used in AJAX land.

IE6 and Firefox support arbitrary methods. The fact that XHR in IE7
doesn't (right now) has been reported (and I think accepted) as a
regression.

> If someone were to start a REST design patterns doc, "inventing new
> verbs" would be the antipattern, one that has just surfaced again in
> Web3S.

I strongly disagree.

> I know that "PATCH" may have appeal, but once you try to have a
> complete sequence of operations in a single verb, you have to worry
> about whether the sequence is atomic, what its rollback on failure
> mode is, whether it is partially visible, etc. WS-RF's bulk get/set

Yes. That's a feature, not a problem.

> had this problem: they didn't spec the atomicity/isolation so you
> have no way of knowing if you are doing atomic or non atomic ops.
> Whereas if you have to make a set of operations, well, your
> expectations are set nice and low.

I do agree that promising too much atomicity can be a problem. But
that only means you need to be careful when defining the method. For
instance, I don't see how requiring a PATCH operation to be atomic
would be a problem. If all goes wrong, a server can still return 5xx.

Best regards, Julian
Bill de hOra wrote:
> Julian Reschke wrote:
> > OK, once middleware and intermediaries have been fixed to properly
> > handle extension methods (*),
>
> That would be great!
>
> > what exactly is the cost of introducing a new method?
>
> The cost of non-uniformity?, he said, clearly dodging the question

The whole point would be so that it *is* uniform. Today, when I submit
a MOVE request I can rely on the server either moving the resource,
rejecting the request, or telling me it doesn't know what it is. Well,
unless the server is completely broken and not checking the method
name. That's uniform, isn't it?

So how do I move an Atom entry between feeds?

Best regards, Julian
On 6/18/07, Julian Reschke <julian.reschke@...> wrote: > So how do I move an Atom entry between feeds? PUT an updated entry with the feed information changed? That's how I'd do it, at least.
Karen wrote:
> On 6/18/07, Julian Reschke <julian.reschke@...> wrote:
> > So how do I move an Atom entry between feeds?
>
> PUT an updated entry with the feed information changed?
>
> That's how I'd do it, at least.

What feed information? Please clarify...

Best regards, Julian
On 6/18/07, Julian Reschke <julian.reschke@...> wrote:
> What feed information? Please clarify...

Clarification: the question mark means I'm guessing. I'm not referring
to anything existing like the APP (which I haven't delved deeply into
just yet), just the general case for MOVE.

Entry is a resource; part of the entry's representation includes the
feed's identification. The server gets the entry PUT back to it, sees
that the difference is the feed, and so it moves it. Internally it may
be nothing like just changing a "feed" field in an "entries" table,
but as far as the client is concerned, that's what you want to change
about the resource, so that's what it looks like.

There's other ways to move things that still don't require a MOVE
action to be created. If all else fails, DELETE from the old, POST to
the new.
Karen wrote:
> On 6/18/07, Julian Reschke <julian.reschke@...> wrote:
> > What feed information? Please clarify...
>
> Clarification: the question mark means I'm guessing. I'm not
> referring to anything existing like the APP (which I haven't delved
> deeply into just yet), just the general case for MOVE.
>
> Entry is a resource, part of entry's representation includes the
> feed's identification. Server gets the entry PUT back to it, sees
> that the difference is the feed, and so it moves it. Internally it
> may be nothing like just changing a "feed" field in an "entries"
> table, but as far as the client is concerned, that's what you want
> to change about the resource, so that's what it looks like.
>
> There's other ways to move things that still don't require a MOVE
> action to be created. If all else fails, DELETE from the old, POST
> to the new.

Note: The AOL Journals REST API provides for changing the URL of a
blog in a similar manner: change a metadata field in the
representation PUT by the client, which (eventually) gets mapped to
the URI used for the blog. Return a new Location: on success. Of
course this is one-off per resource type and requires intimate
knowledge of the API, but the overall pattern is fairly generic.

John
* Bob Haugen <bob.haugen@...> [2007-06-17 22:30]:
> http://www.infoq.com/news/2007/06/entity-services
>
> Confusing (or confused) discussion. I can't figure out if they are
> saying CRUD is bad, or if they are arguing against a
> resource-oriented (noun-oriented) design and would prefer a method
> or verb oriented design.
>
> I suspect the latter.

I get the same impression – that they're saying you should create
abstract method interfaces rather than expose the data for direct
manipulation: Object Oriented Design 101. Now if only those darn
distributed object systems worked…

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi Mike, sorry for the late reply:

* Mike Schinkel <mikeschinkel@...> [2007-05-27 21:15]:
> > That's what I think is great about the AtomPP: it provides a base
> > for a large array of services by defining a few base media types
> > and a number of HTTP transactions complete with meanings, granular
> > enough to be useful without customisation, but with clear
> > extension hooks throughout.
>
> To me this is too close to "the one Media Type to rule them all"
> (what would we do w/o Tolkien?!?). AtomPP seems to me to be more of
> a well-defined conduit for implementing services than anything that
> would help identify specific services, but I haven't been following
> it closely enough to know for sure.
>
> So at the risk of having an opinion based on ignorance, I'd say that
> AtomPP would be a great base but we still need to be able to define
> Webful APIs for specific services, i.e.
>
> "application/atomserv+xml/events"
>
> "Event"s here would still be vetted by a working group somewhere and
> still need to be registered with IANA.
>
> How does that sit with you?

Not sure. I appreciate the concern, but I'm coming to different
conclusions.

Looking at `application/atomsvc+xml` tells a client where to look for
links, and what they mean (well, mostly; there may be extension
elements and link relationships it doesn't know about, but that's
fine). And generic clients for AtomPP are certainly going to be
written. (Going to? Have been.) What specific use case the AtomPP
server in question is designed for should hopefully not matter – not
to the point of needing a new media type, anyway. If it differs that
much, it won't be AtomPP anymore and should be designed from scratch.
(If it doesn't differ that much, then describing the difference within
the document, such as by using the feature extension, would be enough;
intermediaries, f.ex., wouldn't be affected by the specific use case.)

And I do fully expect we will see other generic REST-based protocols,
mostly non-overlapping in the class of target use cases. That is fine
and I consider it a sign of a healthy ecosystem. From another part of
my message that you quoted:

> > We need a middle ground: a small variety of somewhat generic media
> > types that can be used for a wide variety of things. Individual
> > services can then use one of them, and clients can then be
> > implemented as glue on top of a library.

I think our difference in view is that you see AtomPP as ruling the
market, possibly? My expectation is that it will see strong adoption,
and maybe even capture a majority of the pie (a big maybe here),
however I don't think it will take all. Therefore I'm not that
worried.

What might be useful is to have payload media types for common
applications, for reuse across REST-based protocols. But that is
something I think will happen naturally in a non-coordinated fashion
anyway.

For now I just want to watch AtomPP and see how things shake out. REST
outside the browser hasn't really been done in anger on the open web
yet, and I have little faith in our collective ability to anticipate
what will work and what not so much.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> The consequence of mixing up a merge and overwrite operation justify
> alternate methods imo. Consensus (on atom-syntax at the very least)
> is PUT is not a patch/merge operator.

My feeling exactly. I was trying to surface what the goal was -
replace or merge. My concern is that if both methods are a 'merge',
then where do we put 'replace'? I think that MS's new protocol also
uses PUT for partial updates, but I could have read it wrong.
Wow. Sometimes I wonder if this list is becoming the "Anti-POST"
club! ;-)

For the most part, I feel that what you are doing is a perfect fit for
POST. Essentially, you are appending items to a list of read pages,
aren't you? Seems very POSTy to me.

The rule of thumb should be to use the method with the strongest
semantics that fits what you are doing. But at the same time, I don't
think there's a lot of real benefit from driving yourself crazy to
replace POST with PUT/DELETE. The penalty for using semantics that are
too strong (e.g. using a GET for an unsafe operation) is really bad.
The penalty for using weaker semantics is that you might lose out on
allowing intermediaries to optimize or do other handy things with your
request (e.g. cache the result or re-apply a request for reliability
reasons). Using POST has weak semantics but shouldn't be considered a
bad practice IMO.
Jon Hanna wrote:
> Turn the question around. Can we have a PUT that isn't partial?

Unless I misunderstand, the answer should be "no."

> I think for certain resources that's impossible and for many
> undesirable.

If you need a partial PUT, PUT to a resource that represents the
partial representation. Isn't that the correct RESTful way to do it?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
Karen wrote:
> On 6/18/07, Julian Reschke <julian.reschke@...> wrote:
> > What feed information? Please clarify...
>
> Clarification: the question mark means I'm guessing.
>
> <snip>
>
> There's other ways to move things that still don't require a MOVE
> action to be created. If all else fails, DELETE from the old, POST
> to the new.

Well, there are a few problems with that.

1) AFAICT, there is no feed information in the entry. Thus, APP would
need to add that. Too late.

2) Even if it were there, APP would need to define what it means to
edit it for clients to be able to rely on it.

3) And even then, it's not really clear how this would work for
entries that are supposed to appear in multiple feeds.

So you'd end up with lots of additional specification text to define a
functionality that is already defined in a separate, related
standards-track IETF spec.

Best regards, Julian
On 6/18/07, Julian Reschke <julian.reschke@...> wrote: > Steve Loughran wrote: > > > > More than just good principle: best practise. New HTTP methods don't go > > through firewalls, and aren't supported by all the various HTTP client > > That's not my experience. Any examples? Sorry, I meant proxy servers, which are the bane of my life (that and the lack of a centralised proxy server setting on Linux and the inability of Java clients to automatically determine proxy settings with any reliability). Over HTTP (and not HTTPS), a fair few proxies, especially the old MS ones that some companies still have around, are prone to rejecting verbs they don't like, which can mean the entire pre-DAV set of verbs. > > > libs out there. In particular, if javascript in IE/firefox/webkit > > doesn't support the method, it ain't going to be used in AJAX land. > > IE6 and Firefox support arbitrary methods. The fact that XHR in IE7 > doesn't (right now) has been reported (and I think accepted) as a > regression. Any timetable for fixing that? > > If someone were to start a REST design patterns doc, "inventing new > > verbs" would be the antipattern, one that has just surfaced again in > > Web3S. > > I strongly disagree. > > > I know that "PATCH" may have appeal, but once you try and have a > > complete sequence of operations in a single verb, you have to worry > > about whether the sequence is atomic, what its rollback-on-failure > > mode is, whether it is partially visible, etc. WS-RF's bulk get/set > > Yes. That's a feature, not a problem. > > > had this problem: they didn't spec the atomicity/isolation so you have > > no way of knowing if you are doing atomic or non-atomic ops. Whereas > > if you have to make a set of operations, well, your expectations are > > set nice and low. > > I do agree that promising too much atomicity can be a problem. But that > only means you need to be careful when defining the method.
For > instance, I don't see how requiring a PATCH operation to be atomic would > be a problem. If all goes wrong, a server can still return 5xx. I guess you could have an extra header saying "x-atomicity" or something. The risk is that you, the client, talk to one server and get an atomic experience, and all is well, then you go to another implementation and suddenly you end up with race conditions, concurrency problems, other callers seeing operations in the wrong order, etc. Stuff that shows up when you go live, not in testing. In WSRF there is/was no way to request atomic operations, or any way to determine if an endpoint was going to be atomic or not. Which made the bulk ops useless for atomic ops, and useful only for saving time over long-haul links. Now, if you look at the WebDAV specs, their semantics on the big move/copy ops are effectively defined to be those of the win9x file system, i.e. if the operation stops halfway through, the outcome is indeterminate, but probably half-complete. No transactions there. -steve
Steve Loughran wrote: > > > libs out there. In particular, if javascript in IE/firefox/webkit > > > doesnt support the method, it aint going to be used in AJAX land. > > > > IE6 and Firefox support arbitrary methods. The fact that XHR in IE7 > > doesn't (right now) has been reported (and I think accepted) as a > > regression. > > any timetable for fixing that? Are you kidding? We're talking about Microsoft. :-). > I guess you could have an extra header saying "x-atomicity" or > something. The risk is that you, the client, talk to one server and > get an atomic experience, and all is well, then you go to another > implementation and suddenly you end up with race conditions, > concurrency problems, other callers seeing operations in the wrong > order, etc. Stuff that shows up when you go live, not in testing. In > WSRF there is/was no way to request atomic operations, or any way to > determine if an endpoint was going to be atomic or not. Which made the > bulk ops useless for atomic ops, and only for saving time over > long-haul links. > > Now, if you look at WebDAV specs, its semantics on the big move/copy > ops are effectively defined to be those of the win9x file system, i.e. > if the operation stops half way through, the outcome is indeterminate, > but probably half-complete. No transactions there. There are many backends where atomic move/copy of hierarchies just isn't implementable, thus WebDAV doesn't require it. If the server *does* support atomic move, it can implement REBIND (<http://greenbytes.de/tech/webdav/draft-ietf-webdav-bind-18.html#METHOD_REBIND>) which *is* atomic. Best regards, Julian
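The atomic-PATCH point debated above can be made concrete with a small sketch. This is not any real PATCH format; the operation dicts and the 200/500 pairing are invented here to show the "all-or-nothing, else 5xx" contract:

```python
# Hedged sketch: apply a list of patch operations atomically. On any
# failure, roll the resource back to its snapshot and signal a 5xx, so the
# client never observes a half-applied patch.
import copy

def patch(resource, operations):
    """Apply ops all-or-nothing; return 200 on success, 500 on failure."""
    snapshot = copy.deepcopy(resource)
    try:
        for op in operations:
            if op["op"] == "set":
                resource[op["field"]] = op["value"]
            elif op["op"] == "remove":
                del resource[op["field"]]   # KeyError if field is absent
            else:
                raise ValueError("unknown op")
        return 200
    except Exception:
        resource.clear()
        resource.update(snapshot)           # rollback: nothing half-applied
        return 500

doc = {"title": "draft", "tags": "old"}
ok = patch(doc, [{"op": "set", "field": "title", "value": "final"}])
bad = patch(doc, [{"op": "set", "field": "tags", "value": "new"},
                  {"op": "remove", "field": "missing"}])
```

After the failed second call, the first op's "tags" change has been rolled back too, which is the atomicity guarantee under discussion.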
I'm curious to learn which frameworks/tools/languages people on this list are using to implement REST-style applications. As a Java person by pedigree (not fealty) I've been trying to shoe-horn Spring's MVC components into a more resource-oriented and HTTP-friendly framework. However, I feel as though I'm swimming upstream in the servlet world. I've looked at Restlet and, of course, Rails. Is there anything else that one might recommend? Justin
http://www.djangoproject.com/ On 6/19/07, Justin Makeig <jm-public@...> wrote: > I'm curious to learn which frameworks/tools/languages people on this list are using to > implement REST-style applications. As a Java person by pedigree (not fealty) I've been trying > to shoe-horn Spring's MVC components into a more resource-oriented and HTTP-friendly > framework. However, I feel as though I'm swimming upstream in the servlet world. I've looked > at Restlet and, of course, Rails. Is there anything else that one might recommend? > > Justin -- Hugh Winkler Wellstorm Development http://www.wellstorm.com/ +1 512 694 4795 mobile (preferred) +1 512 264 3998 office
On 6/10/07, Jan Algermissen <algermissen1971@...> wrote:
> Hi,
> The background for the question is the Reliable-POST issue and it has
> been raised that, when the server supplies unique IDs for the client
> to include in its POST requests, malfunctioning caches would make it
> possible for two clients to receive the same ID.
I would use POST, not GET, to distribute those unique IDs. I'd also
make those IDs first class resources:
http://bitworking.org/news/201/RESTify-DayTrader
-joe
--
Joe Gregorio http://bitworking.org
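Joe's suggestion of minting IDs with POST and making them first-class resources can be sketched as follows. The IdMinter class and /ids/ URI space are invented here for illustration; see the RESTify-DayTrader post for the real design:

```python
# Sketch: POST (not GET) mints each unique ID, and each minted ID is itself
# a resource. A cache that replayed this request the way it might replay a
# GET would be malfunctioning -- which is exactly why POST is the safe choice.
import uuid

class IdMinter:
    def __init__(self):
        self.minted = {}

    def post(self):
        """Unsafe, non-idempotent: each call creates a fresh ID resource."""
        token = uuid.uuid4().hex
        location = f"/ids/{token}"
        self.minted[location] = {"used": False}
        return 201, location

    def get(self, location):
        """The ID is first-class: clients can inspect its state later."""
        return self.minted.get(location)

minter = IdMinter()
_, loc_a = minter.post()
_, loc_b = minter.post()
```

Two POSTs yield two distinct ID resources, so two clients can never legitimately receive the same one.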
On 6/13/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > "In an interview at IBM's Impact 2007 conference, Jerry Cuomo, CTO > for IBM WebSphere, noted that he was recently named an IBM Fellow and > it is changing the way he thinks about how WebSphere fits into the > Web services and service-oriented architecture (SOA) world. "One of > the things you're supposed to do as a Fellow is be thoughtful and not > just react," he said. That may explain why he did not react to > questions about the more controversial aspects of Java technology in > the same way as some others in the Java platform industry do. He is > taking the long view beyond Java to innovations using REST and Web- > oriented architecture (WOA) or as he terms it "SOA on the Web." > > This is from http://searchwebservices.techtarget.com/qna/ > 0,289202,sid26_gci1257544,00.html?track=sy80 Just a topical side note, I work for Jerry. -joe -- Joe Gregorio http://bitworking.org
[ Attachment content not displayed ]
On 6/19/07, Justin Makeig <jm-public@...> wrote: > I'm curious to learn which frameworks/tools/languages people on this list are using to > implement REST-style applications. Perl. No framework yet, though I'm looking right now at Catalyst (Catalyst::Action::REST, specifically) to see if it'd gain me anything.
On Tue, 19 Jun 2007, Karen wrote:
> On 6/19/07, Justin Makeig <jm-public@...> wrote:
> > I'm curious to learn which frameworks/tools/languages people on this list are using to
> > implement REST-style applications.
>
> Perl. No framework yet, though I'm looking right now at Catalyst
> (Catalyst::Action::REST, specifically) to see if it'd gain me
> anything.
In Perl there's REST::Application [1] which, in typical Perl fashion, is fairly
confusing and fairly powerful and flexible. If I were going to write anything
webby in Perl these days I'd start from REST::Application. But given a free
choice, I'd start from Python, assembling my own tools from WSGI compliant
stuff.
R::A is what's driving Socialtext's sort of REST API [2] and it's doing that
well. I say sort of because there are some things which are not done yet or
done in an expedient way.
[1] http://search.cpan.org/dist/REST-Application/
[2] https://www.socialtext.net/st-rest-docs/index.cgi?socialtext_rest_documentation
--
Chris Dent http://burningchrome.com/~cdent/mt
[...]
On Tue, Jun 19, 2007 at 10:11:29AM -0700, Chris Dent wrote: > But given a free choice, I'd start from Python, assembling my own > tools from WSGI compliant stuff. What about on the client side? I'm curious what Python libraries people have found useful for clients - beyond the obvious httplib, which is fine as far as it goes. I'm more thinking of all the parts of REST other than HTTP, eg. hypertext. I'm thinking something along the lines of testbrowser might be useful, but AFAICT it only deals with HTML: http://cheeseshop.python.org/pypi/zope.testbrowser/ ... doctest examples at: http://svn.zope.org/zope.testbrowser/trunk/src/zope/testbrowser/README.txt?rev=76064&view=markup -- Paul Winkler http://www.slinkp.com
Paul Winkler <pw_lists@...> writes: > On Tue, Jun 19, 2007 at 10:11:29AM -0700, Chris Dent wrote: >> But given a free choice, I'd start from Python, assembling my own >> tools from WSGI compliant stuff. > > What about on the client side? I'm curious what Python libraries > people have found useful for clients - beyond the obvious httplib, > which is fine as far as it goes. I'm more thinking of all the parts of > REST other than HTTP, eg. hypertext. http://bitworking.org/projects/httplib2/ is really, really excellent. A lot of the time though I find I just use the feedparser: http://www.feedparser.org/docs/ I agree that something built on top of those would be cool. That's actually where I want framework help. On the server side it seems pretty simple. -- Nic Ferrier http://www.tapsellferrier.co.uk
A. Pagaltzis wrote: > > > That's what I think is great about the AtomPP: it > > > provides a base for a large array of services by > > > defining a few base media types and a number of HTTP > > > transactions complete with meanings, granular enough > > > to be useful without customisation, but with clear > > > extension hooks throughout. > > > > > To me this is too close to "the one Media Type to rule > > them all" (what would we do w/o Tolkien?!?). AtomPP > > seems to me to be more of a well-defined conduit for > > implementing services than anything that would help > > identify specific services, but I haven't been > > following it closely enough to know for sure. > > > > So at the risk of having an opinion based on ignorance, > > I'd say that AtomPP would be a great base but we still > > need to be able to define Webful APIs for specific > > services, i.e. > > > > "application/atomserv+xml/events" > > > > "Event"s here would still be vetted by a working group > > somewhere and still need to be registered with IANA. > > > > How does that sit with you? > > > Not sure. I appreciate the concern, but I'm coming to > different conclusions. > > Looking at `application/atomsvc+xml` tells a client where > to look for links, and what they mean (well mostly; there > may be extension elements and link relationships it > doesn't know about, but that's fine). And generic clients > for AtomPP are certainly going to be written. (Going to? > Have been.) > > What specific use case the AtomPP server in question is > designed for should hopefully not matter - not to the > point of needing a new media type anyway. If it differs > that much, it won't be AtomPP anymore and should be > designed from scratch. > > (If it doesn't differ that much, then describing the > difference within the document, such as by using the > feature extension, would be enough; intermediaries, > f.ex., wouldn't be affected by the specific use case.) 
> > And I do fully expect we will see other generic > REST-based protocols, mostly non-overlapping in the class > of target use cases. That is fine and I consider it a > sign of a healthy ecosystem. From another part of my > message that you quoted: > > > > We need a middle ground: a small variety of somewhat > > > generic media types that can be used for a wide > > > variety of things. Individual services can then use > > > one of them, and clients can then be implemented as > > > glue on top of a library. > > > > I think our difference in view is that you see AtomPP as > ruling the market, possibly? My expectation is that it > will see strong adoption, and maybe even capture a > majority of the pie (a big maybe here), however I don't > think it will take all. Therefore I'm not that worried. > > What might be useful is to have payload media types for > common applications, for reuse across REST-based > protocols. But that is something I think will happen > naturally in a non-coordinated fashion anyway. > > For now I just want to watch AtomPP and see how things > shake out. REST outside the browser hasn't really been > done in anger on the open web yet, and I have little > faith in our collective ability to anticipate what will > work and what not so much. Actually, your last paragraph sums it up for me too. What I'm struggling with is this: I think things that are too easy to create and change result in interoperability problems. Imagine people creating hundreds of HTTP methods? Zero interoperability. So the easier it is to create new content types, the more people will do it. Or put another way, if AtomPP becomes the universal content type with an infinite number of (essentially) "subtypes" then interoperability will be limited to the least common denominator for AtomPP. If apps depend on higher level functionality, they won't work and without, they won't do much interesting. But honestly, I don't have an answer for it all, I'm just pontificating. 
-- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
A. Pagaltzis wrote:
> > > That's very simple: if the URI is constructed based
> > > on hypermedia, then the application is RESTful. If
> > > not, it's not.
> > >
> > But it is a point that is not even addressed by many
> > when discussing with people who are learning REST, and
> > those newly christened RESTians go forth and preach the
> > dogma that URIs cannot be constructed, period. My
> > "tiresome" comments are meant to shine a light on the
> > issue to discourage even more future cargo-cultists.
> >
> You mean you assume that Jon doesn't know what REST means
> and doesn't mean?
That's not what I assume at all. I have a high respect for Jon's knowledge
of REST. I assume however that people may hear or read Jon's words about
REST (or yours or mine, for that matter) understanding the words but not the
essence.
> > > > Heh. Any and every REST system will fail that test.
> > > > After all, how do you change the entry point URL?
> > > > '-)
> > > >
> > > Not by changing the code, hopefully.
> > >
> > > In fact, you could make that URI a link in a resource
> > > you control, in which case the REST client would in
> > > fact not need to change *at all*.
> > >
> > > See? We could split hairs about the exceptional case
> > > of the entry point all day long. :-)
> > >
> > > What matters is you know what Jon meant; and you do.
> > >
> > That's a marginal case on the open Internet. Publishing
> > an API for others to consume ensures that this will
> > almost certainly not be the case. My bringing it up was
> > NOT splitting hairs, it was to make the point that REST
> > is not pure as physics is pure, and that there are edge
> > case problems with the hypermedia constraint. As every
> > REST system could theoretically be composed to make a
> > larger REST system, the once published entry point now
> > becomes verboten to be constructed.
> >
> Uh, nowhere did I admit that it needs to be constructed.
> The client needs to receive *some* entry point URI out of
> band. This is called a "bookmark." Why would the client
> ever *construct* one?
>
> So since your premise looks false to me.
Sorry, I misspoke. I sometimes get my wires crossed when debating multiple
orthogonal issues. I meant to say that the former entry point must now be
reached by following hypermedia, whereas when it was not part of the larger
system it was typed in directly.
> > And the converse is also true, that many REST services
> > could be decomposed into smaller independent services.
> > When the decomposition occurs, what is the entry point?
> > Is it a constructed URL, or did you have to follow
> > hypermedia from the larger service to get to it? And if
> > that larger service is then composed with yet more
> > services, where are the valid entry points that don't
> > require hypermedia? Thus I see a problem with the
> > hypermedia constraint because it does not scale upwards
> > or downwards.
> >
> ... then necessarily I must consider your conclusion false
> too.
Please revisit it now that I have clarified my intent.
> > While I see its theoretical benefit, I see problems in
> > its real world use as just described and that's why I
> > think it is so important to actively encourage the
> > incorporation of URL composition into the mix at all
> > levels. By encouraging URL construction using
> > templates, REST services will be more easily able to
> > scale, albeit there will still be edge problems but less
> > so.
> >
> No, they will just be easier to create, but harder to
> maintain, because they'll be stronger coupled - unless
> the URI template comes from hypermedia.
I've been trying to say that URI templates would come from hypermedia! But
(and I'm probably the cause of the confusion) the orthogonal issue of entry
point is still an issue when one starts looking at composition and
decomposition.
Ultimately, compared to all other constraints of REST, I see hypermedia as
being "nice to have in many cases, a must in many others, and not nice to
have in a few." Of course, I'm sure that RoyTF would disagree. :)
> Additionally, non-hypermedia based URI construction has
> no scalability effect in the small and a second-order negative
> one in the large.
huh?
> > Services would compose URLs based on templates, but
> > where the template comes from is the edge case yet that
> > can easily be provided by the larger service when
> > services are composed. As is, services faithfully following the
> > hypermedia constraint are ironically brittle
> > with respect to any changes involving composition or
> > decomposition.
> >
> I really can't follow that conclusion.
Yes, my words were poorly written. Essentially, services that follow
hypermedia are also as brittle as services that hard-code URLs when they are
composed with other services, and using templates reduces some of that
brittleness because they are by nature parameterized. And yes, all can be
mitigated if careful, but the same is true if hypermedia is not used.
> > What's more, assuming an arbitrary Internet-published
> > REST-based API, it is much easier to program a direct
> > resource retrieval using URI composition than it is to
> > program a hypermedia-following resource retrieval,
> > partly because there are absolutely no standards for
> > such discovery and retrieval leaving the hapless
> > developer or entrepreneur to code it themselves.
>
> That's why AtomPP is such a huge deal.
That is something I plan to study. I reserve judgement to see how long
before conformant AtomPP libraries are deployed on all platforms and I'll
also be interested to see if AtomPP is sufficiently simple that "average
people" will be able to publish AtomPP feeds. I currently remain skeptical.
> > For great developers it is trivial, but for many
> > smaller businesses or entrepreneurs w/o hotshot
> > developers on staff it is not. So a company publishing
> > a web API can either tell its potential users to
> > follow the pure REST hypermedia model, or do URI
> > construction. And if they do the latter, they are
> > likely to get a lot more people using it. Which would
> > you choose? If you say hypermedia, I can tell we are
> > discussing a hypothetical question and not one on which
> > your livelihood depends.
> >
> Seeing as I'm primarily a Perl hacker, I'll just point to
> WWW::Mechanize for this matter.
The fact that you are a Perl hacker makes my point. Being able to program
in Perl means you've got more of what's needed for these things than >99%
of the population. Enabling only the elite does not make for web scale.
> Proof's in eating the pudding. Doing hypermedia is very
> easy given tooling that abstracts the rote work.
It is not easy unless you are very experienced, as you are. Things that are
trivial for a Perl hacker are beyond comprehension for an HTML+CSS+(very
little)Javascript web guy.
> I'm basing this on actual experience, not hypothesis,
Whose experience? Your own? Or experience working with lots of more
business-than-technically oriented web developers? FYI, I have a group of
over 200 such people in a group I organize a monthly meeting for.
> much as you'd like to paint
> the REST proponents with the ivory tower brush.
You mean just as much as you'd like to put words in my mouth? '-) Not ivory
tower; I assume RESTians are practitioners, just very, very good ones.
> > Finally, for an open API published on the web, I am
> > almost willing to argue that textually publishing the
> > URL format and encouraging people to do URI
> > construction w/o hypermedia is okay assuming the
> > company is willing to maintain those URLs. After all, I
> > can't see any reason why Amazon couldn't commit itself
> > to maintain its services at http://services.amazon.com/
> > where to get info on one of the items they sell you
> > would just append their "ASIN" to the end of their
> > "items" URL
> > http://services.amazon.com/items/1234567890/
> >
> That's fine for Amazon. It's not so fine at the other end
> of the wire, because then the other end of the wire is an
> Amazon client as opposed to a web shop client. Of course
> Amazon has no incentive to care about that.
What's a web shop client?
And if the Amazon service is generalized to many other sites then your
reason doesn't argue against using URIs constructed on the client; they
would just use a different local template.
> > There are many things in life where companies need to
> > put a stake in the ground and then maintain that stake,
> > i.e. car makers have to maintain spare parts for their
> > cars for many years, I see no reason why it should be
> > absolutely forbidden for companies to publish REST apis
> > for the open Internet that do not require hypermedia to
> > discover and parse.
> >
> Imagine if every car company had their own designs for
> screws, nuts, bolts, lightbulbs, batteries, tires, etc.
> complete with car-maker-specific screwdrivers,
> rechargers, tire inflators etc., with a stated promise
> that production of these parts would be maintained
> indefinitely.
Bad analogy. Instead imagine that every car company were told best practice
was not to allow screwdrivers to unscrew screws directly, but instead that every
screw must have a mechanism that unscrews itself on request (i.e. a
level of indirection).
Actually, your example makes my point; there are standards for screws and
once we know the standard we can mint a new screw. With the hypermedia
constraint we are not allowed to know how to create a screw; we can only ask
for the document that explains the screw each time we want to create one.
My point is that it would be better to know the URLs for certain classes of
well-established services than to always have to first look them up, which is
often an unnecessary level of indirection for well-established services.
We could also discuss the "well-known name" problem (i.e. like favicon.ico)
where we stated that a service would always be at /services/whatever, but
that's a whole 'nother can o'worms (and I frankly don't advocate trampling on
URL space by using well-known names.)
> > The hypermedia constraint is simply the web's example
> > of the more general abstraction and indirection pattern
> > used to improve maintainability of systems throughout
> > software development. But as experience has shown us,
> > too much abstraction and too much indirection make for
> > too much complexity, and that pill can at times be
> > worse than the ailment it attempts to cure.
>
> Smalltalk is based on an indirection at the core of the
> language semantics level, and practice has since shown
> that extremely late-bound messaging communication leads
> to much more flexible and resilient systems than are
> possible with static early binding.
"practice has since shown?" Want to qualify that?
Never mind; I agree in general but think that, as with most things, the hypermedia
constraint should not be considered a silver bullet.
> > I know this as I have often tried to over-generalize a
> > system only to find I'd made it too complex to work
> > with.
> >
> And you found no cases where too little indirection made
> things too hard? Beware of confirmation bias.
I never stated that; you again put words in my mouth. I was not addressing
that path because I had no need to.
> > Sometimes it is better to simply hardcode something
> > than to make it too complex. And I'd argue that
> > published open APIs on the Internet could well be a
> > valid place where URLs could be reasonably hardcoded.
>
> If that's the case, then AtomPP, which is
> machine-readable hypermedia writ large, will crash and
> burn.
>
> I'll let history be the judge of that, but I think I can
> already tell what history will have had to say about this
> one.
You are arguing a false dichotomy. That's a tired and insincere debate
technique. Stop it.
> > My guess is that you deal with internal systems a lot
> > more than you deal with the open internet. Maybe that's
> > why your bias differs from mine.
>
> Funny you should say that, as the SOAP/WS-* philosophy
> with its early binding/tight coupling/code gen mindset
> comes from internal systems rather than the open web.
>
> And no, I don't deal much at all with internal systems.
My apologies for making an incorrect assumption.
> Don't you think I'd believe much more in tools if that
> were the case?
I don't really know what you believe, I don't know you that well.
> > > But I think this matter will straighten itself out
> > > over time as more people absorb the lessons and apply
> > > it in practice, coming away with examples from
> > > experience.
> > >
> > > It's just the natural process of adoption.
> > >
> > ...and as people like me, and you, have these
> > *tiresome* debates.
> >
> What I found tiresome is not the debate but rather your
> desire to be controverted, leading you to incessantly
> make up sentiments like "begrudging" out of thin air. I'm
> not here for an interest in claims about each other's
> supposed belief systems.
Reread your email to me. Be careful that you don't become what you most
despise.
> As for my own bias, I'll note that my lightbulb moment
> regarding hypermedia was just two weeks or so ago and I'm
> since realigning my understanding of REST as a whole
> already.
And I'll repeat: "There are none so zealous as the newly converted." '-)
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
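The middle ground raised in this exchange, URI templates that themselves arrive via hypermedia, can be sketched briefly. The service document and template below are invented for illustration; only the technique (expand a template you fetched, rather than one you hardcoded) comes from the discussion:

```python
# Sketch: the client still constructs URIs, but the template comes from a
# hypothetical service document retrieved at run time, so the server can
# change its URI layout without breaking deployed clients.
import string

# Pretend this dict was obtained by GET on the service's entry point:
service_doc = {"item-template": "http://services.example.com/items/${asin}/"}

def item_uri(doc, asin):
    """Expand the hypermedia-supplied template for a given item ID."""
    return string.Template(doc["item-template"]).substitute(asin=asin)

uri = item_uri(service_doc, "1234567890")
```

The coupling is now to the template's variable names rather than to a fixed URI layout, which is the reduced brittleness Mike describes.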
Aristotle Pagaltzis wrote: > > > For great developers it is trivial, but for many > > > smaller businesses or entrepreneurs w/o hotshot > > > developers on staff it is not. So a company > > > publishing a web API can either tell its potential > > > users to follow the pure REST hypermedia model, or do > > > URI construction. And if they do the latter, they are > > > likely to get a lot more people using it. Which would > > > you choose? If you say hypermedia, I can tell we are > > > discussing a hypothetical question and not one on > > > which your livelihood depends. > > > > Seeing as I'm primarily a Perl hacker, I'll just point > > to WWW::Mechanize for this matter. Proof's in eating > > the pudding. Doing hypermedia is very easy given > > tooling that abstracts the rote work. I'm basing this > > on actual experience, not hypothesis, much as you'd > > like to paint the REST proponents with the ivory tower > > brush. > > To expand on this point: saying "it's easy for great > developers to write hypermedia clients" sounds to me like > the following would if the clock were turned back 15 > years: "it's easy for great developers to implement HTTP > clients". > > So imagine the clock having turned forward a few years > and then consider the argument that hypermedia is hard to > program to again. Ah, but I'm not talking about 2022, I'm talking about 2007. As times change, best practices change. Imagine me saying "Let's set up an international discussion group for regular people" in 1945. That would be incomprehensible. RoyTF's thesis has an abstract purity to it, but pragmatically the hypermedia constraint is less than fully workable given current realities. Given *those* constraints, it could well be that the better solution IS to offer up documentation for URL construction and avoid the hypermedia constraint. That's the hypothesis I'm stating. > Hypermedia has the same simplicity story; it's easy to roll > tooling for it No it is NOT. For a Perl hacker, yes.
For an average Joe, no. Want to accelerate that? Start writing open-source clients for well-known services in the mainstream languages (see the P.S.) > and in due time the market will have consolidated on Good > Enough existing tooling so that people won't have to, That I'll give you, and my position will probably need to be revised when that reality exists. I will thank you, however, for this dialog as it has helped me understand my own thoughts on the subject. I previously felt that there was something wrong with the hypermedia constraint but could not put my finger on it exactly. Now I know it is simply that, at this point in time, publishing services requiring hypermedia traversal is less optimal for a class of potential services than allowing clients to hardcode and/or construct URLs. As tools evolve and become ubiquitous, and as well-known services and their open-source clients emerge, use of hypermedia will evolve to being ideal in most if not all cases. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us P.S. BTW, have you signed up to the www.simplewebservices.org list where we are discussing those future well-known services?
Given the importance of caching to REST, I thought that it was worth highlighting this blog post by Mark Nottingham:

http://www.mnot.net/blog/2007/06/20/proxy_caching

Here are some quotes of what caught my attention:

"The bad news is that more complex functionality is spottily supported, at best. I suspect this is because everyday browsing doesn't exercise HTTP like more advanced uses like WebDAV, service APIs, etc."

Request-URIs: "Every implementation was able to handle 1024 byte long request URIs, but only a few were configured to allow 8192 bytes."

HTTP Methods: "GET, HEAD, POST, PUT, DELETE, OPTIONS, and TRACE all seemed to work OK, but quite a few caches had problems with extension HTTP methods. If you're using non-standard HTTP methods (or even some of the more esoteric WebDAV methods; there are a lot of them), beware."

Conditional Requests: "Validation was good in the simple cases, but tended to fall down in more complex circumstances, especially in situations with weak ETags, If-Range headers and other not-so-common things."

Cache Updates: "Caches are required to be updated by the headers in 304 responses, as well as responses to HEAD [...] In practice, updates were spotty [...] As a result, it's probably not a good idea to rely on 304 responses or HEAD requests to update headers; better to just send a 200 back with a whole new representation."

Cache Invalidation: "Sadly, one of the most useful parts of the caching model, invalidation by side effect, isn't supported at all. A few implementations would invalidate the Request-URI upon a DELETE, and even fewer upon PUT and DELETE, but that's it. As a result, it's harder to take full advantage of the cache, because you'll have to mark things as uncacheable if you care about changes being available immediately."

Warnings: "The Warning header is almost never generated by implementations, as far as I saw; disappointing. Don't rely on getting warnings from caches about stale responses [...]"

I would have liked to see something about Vary. Otherwise, very interesting.

Regards,
Alan Dean
http://thoughtpad.net/alan-dean
http://simplewebservices.org
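For readers following along, the validation and update behaviour Mark tested can be modelled in a few lines of Python. This is a simplified, hypothetical cache model (the function and field names are made up for illustration), not real proxy code; it shows a conditional revalidation, and why treating a 200 as a full replacement sidesteps the spotty 304/HEAD header-update behaviour he observed.

```python
# Simplified sketch of HTTP cache revalidation: the cache sends its
# stored entity tag back (as If-None-Match); a match means 304 and the
# stored body can be reused, otherwise a full 200 replacement follows.

def revalidate(stored_etag, current_etag):
    """Status a well-behaved origin sends for a conditional GET."""
    if stored_etag == current_etag:
        return 304  # entry unchanged; cache may reuse stored body
    return 200      # entity changed; full new representation follows

def refresh(cache, url, current_etag, current_body):
    """Per Mark's advice: replace the whole entry on 200 rather than
    relying on 304 responses to patch stored headers."""
    status = revalidate(cache[url]["etag"], current_etag)
    if status == 200:
        cache[url] = {"etag": current_etag, "body": current_body}
    return status
```

A usage sketch: a cache holding `{"etag": '"v1"', "body": ...}` gets 304 as long as the origin's tag is still `"v1"`, and a full replacement the moment it changes.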
Hi,
in Joe's RESTified Day Trader[1], isn't the proposed solution:
Client 'pending order' collection
-------------------- POST --------------->
<--------------- 201 Created -------------
(Location: pending order)
Client 'pending order'
-------------- PUT ------------->
(order)
<--------- 303 See Other --------
(Location: open order URI)
actually redirecting the PUT (as opposed to performing the PUT and
redirecting the client)?
The 303 tells the client to PUT elsewhere and *not* that the PUT has
been performed (which would be 2xx).
So we end up with the need to PUT twice, don't we?
Or am I missing something?
Jan
[1] http://bitworking.org/news/201/RESTify-DayTrader
303 is more like a 2xx in that it assumes the request was successful, but then sends the client to the other URI to find the results of the processing of the request. On 6/20/07, Jan Algermissen <algermissen1971@...> wrote: > Hi, > > in Joe's RESTified Day Trader[1], isnt't the proposed solution: > > > > Client 'pending order' collection > -------------------- POST ---------------> > <--------------- 201 Created ------------- > (Location: pending order) > Client 'pending order' > -------------- PUT -------------> > (order) > <--------- 303 See Other -------- > (Location: open order URI) > > > actually redirecting the PUT (as opposed to performing the PUT and > redirecting the client)? > > The 303 tells the client to PUT elsewhere and *not* that the PUT has > been performed (which > would be 2xx). > > So we end up with the need for PUTing two times, don't we? > > Or am I missing something? > > Jan > > > > > [1] http://bitworking.org/news/201/RESTify-DayTrader > > > > > Yahoo! Groups Links > > > >
On 6/20/07, Jan Algermissen <algermissen1971@...> wrote: > Hi, > > in Joe's RESTified Day Trader[1], isnt't the proposed solution: > > Client 'pending order' collection > -------------------- POST ---------------> > <--------------- 201 Created ------------- > (Location: pending order) > Client 'pending order' > -------------- PUT -------------> > (order) > <--------- 303 See Other -------- > (Location: open order URI) > > actually redirecting the PUT (as opposed to performing the PUT and > redirecting the client)? From http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.4 10.3.4 303 See Other The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. -joe -- Joe Gregorio http://bitworking.org
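Joe's quote resolves Jan's worry: a 303 in response to the PUT means the request was processed, and the client retrieves the result with a GET on the Location URI rather than replaying the PUT there. A minimal Python sketch of that client-side rule (the `fake_get` stand-in and the URI are illustrative, not from the Day Trader example):

```python
# RFC 2616 10.3.4: the resource named in Location "SHOULD be retrieved
# using a GET method" - regardless of the original request method.

def follow_303(status, location, get):
    """Handle the response to a PUT. On 303, GET the Location URI;
    do NOT re-issue the PUT against it."""
    if status == 303:
        return get(location)
    return status, None

def fake_get(uri):
    # Stand-in for fetching the open-order resource the server named.
    return 200, {"uri": uri, "state": "open"}
```

So there is only one PUT; the second request the client makes is a safe, idempotent GET.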
On 6/19/07, A. Pagaltzis <pagaltzis@...> wrote:
> Now if only those darn distributed object systems worked

To be fair, they do sometimes work in a (reliable) LAN, though it depends on you having the ability to keep every version of the software in perfect sync, which currently means dynamic classloading, and it does also need developers to think of (and test for) distribution right from the outset. What you end up doing is creating one single application that spans multiple machines, sharing code as well as data across them, and hoping your dist-object framework of choice can handle distributed GC with some bounded reliability.

What they don't do is scale to the long haul, or across versions and applications, which is of course the SOAP story. I have heard a really funny story about the European grid (EGEE) and the CERN ATLAS project that I may have to blog about, even though SOAP itself is not the real culprit, more architectural decisions about what kind of jobs run on a grid.

-steve
On 6/20/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote:
> I have heard a really funny story about the european grid (EGEE), and
> the CERN atlas project, that I may have to blog about, even though
> SOAP itself is not the real culprit, more architectural decisions
> about what kind of jobs run on a grid

Here it is. The big EU grids, EGEE, LCG, etc, all have big schedulers and have been using some of the stuff standardised in the grid standards bodies (GGF -> OGF), etc, plus stuff written themselves. ATLAS, one of the big experiments, has been busy running long-lived jobs that run for days, keep server utilisation up, act as a nice success story to validate the standards at work, etc.

But it turns out that what they are scheduling, Cronus, is actually a commercial grid application, Condor, that uses boring old TCP and UDP instead of WS-whatever to talk, and bypasses all the federation and load management:

https://twiki.cern.ch/twiki/bin/view/Atlas/CronusVirtualComputingCluster
http://cdfcaf.fnal.gov/doc/EuroCondor06/GridCAF.pdf

The result is that the Cronus nodes get scheduled and then take over the systems, validating their config, re-advertising under Condor if they are healthy, and accepting shorter-lived jobs with much better responsiveness and reliability:

http://indico.cern.ch/materialDisplay.py?contribId=30&sessionId=4&materialId=slides&confId=5060

Most amusing. I will note that their logging leaves something to be desired:

http://indico.cern.ch/materialDisplay.py?contribId=139&sessionId=4&materialId=slides&confId=5060

Something like map/reduce, it would seem to me. There is going to be some workshop down in Geneva on logging shortly, if it isn't already past...

-steve
I know the issue of whether a partial representation can be PUT has come up in at least a couple of threads in this list already, eg "Partial PUT..." and "Bass-ackwards" (and has come up in at least a couple of threads in AtomPP). But I have not seen any definitive resolution of the debate. Worse, after a couple of hours doing my homework on the issue (ie searching the AtomPP mailing list and this one), I have discovered that some participants in the debate seem to have changed their minds on the subject (eg Mark Baker and Joe Gregorio).

Here are the most relevant positions I could find on the subject:

<http://trailfire.com/ironick/trails/40085/The%20Ambiguous%20Semantics%20of%20HTTP%20PUT%3A%20Complete%20vs.%20Incomplete%20Representations%20>

I'd appreciate it if someone could clarify if consensus was ever reached in this list or the AtomPP list or any other "authoritative source".

NOTE: I am NOT seeking people's individual opinions on the subject.[1] I AM seeking any evidence that the "experts" on REST/AtomPP/HTTP (RFC 2616) have come to agreement on this issue. What would be really useful is clarification on whether there is consensus for each of the set of constraints: Does RFC 2616 allow partial representations to be PUT? Does REST? Does AtomPP?

Obviously, this matters a lot more now that Microsoft has published a draft of its Web3S spec, which enables partial updates via PUT (which Microsoft describes as "merge semantics"). One of the justifications for Web3S is that AtomPP does NOT allow partial updates to be PUT. If this is not true, then it might help close the gap between the two specs.

Thanks.

-- Nick

[1] While I am NOT seeking individual opinions on the topic, I would welcome clarifications/corrections from the people whose positions I referred to in the above URL.
IIRC, on atom-protocol at least the issue was left like this: Servers can do what they want. Clients _cannot_ assume that a PUT does a complete replacement; if they want to clear out a field they need to GET (with ETag), empty the field, and then PUT (using If-Match) including all of the prior state. Of course even in that case servers can do what they want, including editing the resource after accepting it to conform to its own rules. (A reasonable example of this might be changing the authorship information if it conflicts with authentication credentials and the server's rules about who can do what.) As a data point, at least two APP services use partial PUTs in at least some circumstances in the field. In related news, James Snell just blogged that he's reviving Lisa's PATCH draft and fixing it up. -John Nick Gall wrote: > I know the issue of whether a partial representation can be PUT has > come up in at least a couple of threads in this list already, eg > "Partial PUT..." and "Bass-ackwards" (and has come up in at least a > couple of threads in AtomPP). But I have not seen any definitive > resolution of the debate. Worse, after a couple of hours doing my > homework on the issue (ie searching the AtomPP mailing list and this > one), I have discovered that some participants in the debate seem to > have changed their minds on the subject (eg Mark Baker and Joe Gregorio). > > Here are the most relevant positions > <http://trailfire.com/ironick/trails/40085/The%20Ambiguous%20Semantics%20of%20HTTP%20PUT%3A%20Complete%20vs.%20Incomplete%20Representations%20> > I could find on the subject. > > I'd appreciate it if someone could clarify if consensus was ever > reached in this list or the AtomPP list or any other "authoritative > source". > > NOTE: I am NOT seeking people's individual opinions on the subject.[1] > > I AM seeking any evidence that the "experts" on REST/AtomPP/HTTP (RFC > 2616) have come to agreement on this issue. 
What would be really > useful is clarification on whether there is consensus for each of the > set of constraints: Does RFC 2616 allow partial representations to be > PUT? Does REST? Does AtomPP? > > Obviously, this matters a lot more now that Microsoft has published a > draft of its Web3S spec, which enables partial updates via PUT (which > Microsoft describes as "merge semantics"). One of the justifications > for Web3S is that AtomPP does NOT allow partial updates to be PUT. If > this is not true, then it might help close the gap between the two specs. > > Thanks. > > -- Nick > > [1] While I am NOT seeking individual opinions on the topic. I would > welcome clarifications/corrections from the people whose positions I > referred to in the above URL. >
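The GET/empty-the-field/PUT-with-If-Match round trip John describes can be sketched against a toy in-memory server. All names here (`FakeServer`, `clear_field`) are hypothetical illustrations, not any real APP implementation; the point is that the client always PUTs the complete prior state, and a stale ETag is caught as 412 Precondition Failed.

```python
class FakeServer:
    """Hypothetical in-memory resource with ETag-based concurrency."""

    def __init__(self, entry):
        self.entry = dict(entry)
        self.version = 1

    def get(self):
        # Return the current ETag alongside the full representation.
        return '"%d"' % self.version, dict(self.entry)

    def put(self, if_match, full_entry):
        # Reject the update if someone else changed the resource since
        # the client's GET (the If-Match precondition fails).
        if if_match != '"%d"' % self.version:
            return 412
        self.entry = dict(full_entry)
        self.version += 1
        return 200

def clear_field(server, field):
    """GET with ETag, empty the field, PUT back *all* prior state."""
    etag, entry = server.get()
    entry[field] = ""
    return server.put(etag, entry)
```

Under this discipline a client never drops fields in the round trip, so the "omit == unset vs. omit == don't care" ambiguity never arises on the wire.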
On 6/20/07, John Panzer <jpanzer@...> wrote:
> IIRC, on atom-protocol at least the issue was left like this: Servers can do what
> they want. Clients _cannot_ assume that a PUT does a complete replacement;

That is not the same issue. The question is what the message from the client means.

1.) Must a server obey everything in a client message in order to return 2xx?
2.) Do omissions in a client PUT message unset those portions, or do they mean only update the included elements?

The answer to #1 is: of course not.
The answer to #2 is: they mean "unset". (simple formula: turn on generic PUTs in Apache and observe its behavior)

Now, obviously, Atom servers are going to ignore client instructions if they try wacky things like deleting atom:id. But that is because of point #1. Nick's question concerns point #2.

--
Robert Sayre

"I would have written a shorter letter, but I did not have the time."
Robert Sayre wrote: > On 6/20/07, John Panzer <jpanzer@...> wrote: > >> IIRC, on atom-protocol at least the issue was left like this: Servers can do what >> they want. Clients _cannot_ assume that a PUT does a complete replacement; >> > > That is not the same issue. The question is what the message from the > client means. > > 1.) Must a server obey everything in a client message in order to return 2xx ? > 2.) Do omissions in a client PUT message unset those portions, or do > they mean only update the included elements. > > The answer to #1 is: of course not. > The answer to #2 is: they mean "unset". (simple formula: turn on > generic PUTs in Apache and observe its behavior) > Messages only have useful semantics if both parties understand them. What I recall is that there was no consensus that Atom servers must choose "omit == unset" as opposed to "omit == don't care", and it's therefore unspecified (by AtomPub) what happens when you omit a field. Note that any client that cares about this must have already retrieved the original data it's modifying and it only wanders into unspecified territory if it starts dropping fields in the round-trip. If someone else thinks there was actually consensus on this point please let the AtomPub editor know about it. Personally I think that partial updates are really useful, and I would like to see them accomplished in a standard way, but that way doesn't have to be baked into the AtomPub core spec and in fact it's orthogonal to Atom. Unfortunately there wasn't an obvious clear winner for how to do partial updates. Personally I like PATCH for clarity but worry about deployment issues with intermediaries and libraries that even today think "all the world's a GET". My next choice would be a POST with a standard delta update MIME type to the resource, though that's a fallback position obviously. I look forward to seeing James Snell's proposed RFC on the subject. Thoughts? John
On 6/20/07, Robert Sayre <sayrer@...> wrote:
> On 6/20/07, John Panzer <jpanzer@...> wrote:
> >
> > IIRC, on atom-protocol at least the issue was left like this: Servers can do what
> > they want. Clients _cannot_ assume that a PUT does a complete replacement;
>
> That is not the same issue. The question is what the message from the
> client means.
>
> 1.) Must a server obey everything in a client message in order to return 2xx ?
> 2.) Do omissions in a client PUT message unset those portions, or do
> they mean only update the included elements.
>
> The answer to #1 is: of course not.
> The answer to #2 is: they mean "unset". (simple formula: turn on
> generic PUTs in Apache and observe its behavior)

I don't think you even need to go there with "unset". A PUT request requests that the server set the state of the targeted resource to that represented in the message. If the server does that - as determined by the server - then 2xx. If not, 4xx/5xx.

"Unset" suggests that the message, by leaving stuff out, is requesting that said stuff be explicitly set to something (like a default). One could specify a media type which did that, but I don't know of any that do, and would consider it bad practice anyhow (PSVI anyone?).

As to Nick's question, if interoperability depends on consensus rather than protocol, we're in big trouble. PUT means what it says in 2616 and I'm content to work with that.

BTW, if we're going to dig any deeper into this, I think examples would be very helpful.

Mark.
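Since examples were asked for: the two readings under debate can be made concrete against a plain dict standing in for resource state. This is purely an illustration (no particular server behaves exactly like this dict); it contrasts the RFC 2616 replace reading with the "merge" reading Web3S describes.

```python
def put_replace(store, uri, representation):
    """RFC 2616 reading: the enclosed entity *is* the new state of the
    resource; anything the client omitted is gone afterwards."""
    store[uri] = dict(representation)

def put_merge(store, uri, representation):
    """'Merge semantics' reading: omitted fields are left untouched;
    only the included elements are updated."""
    store.setdefault(uri, {}).update(representation)
```

With replace semantics, a client that drops `summary` from the round trip has unset it; with merge semantics, there is no way to unset it at all without some extra convention, which is Robert's "entry doesn't need a summary anymore" complaint.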
An XML initiative was started to describe partial XML updates (patches) for XML databases. See http://xmldb-org.sourceforge.net/xupdate/xupdate-wd.html#N56860b for the delete spec. I don't know how complete the spec ever got, but it certainly never gained support. John On 6/20/07, Mark Baker <distobj@...> wrote: > On 6/20/07, Robert Sayre <sayrer@...> wrote: > > On 6/20/07, John Panzer <jpanzer@...> wrote: > > > > > > IIRC, on atom-protocol at least the issue was left like this: Servers can do what > > > they want. Clients _cannot_ assume that a PUT does a complete replacement; > > > > That is not the same issue. The question is what the message from the > > client means. > > > > 1.) Must a server obey everything in a client message in order to return 2xx ? > > 2.) Do omissions in a client PUT message unset those portions, or do > > they mean only update the included elements. > > > > The answer to #1 is: of course not. > > The answer to #2 is: they mean "unset". (simple formula: turn on > > generic PUTs in Apache and observe its behavior) > > I don't think you even need to go there with "unset". A PUT request > requests that the server set the state of the targetted resource to > that represented in the message. If the server does that - as > determined by the server - then 2xx. If not, 4xx/5xx. > > "Unset" suggests that the message, by leaving stuff out, is requesting > that said stuff be explicitly set to something (like a default). One > could specify a media type which did that, but I don't know of any > that do, and would consider it bad practice anyhow (PSVI anyone?). > > As to Nick's question, if interoperability depends on concensus rather > than protocol, we're in big trouble. PUT means what it says in 2616 > and I'm content to work with that. > > BTW, if we're going to dig any deeper into this, I think examples > would be very helpful. > > Mark. > > > > Yahoo! Groups Links > > > > -- John D. 
Heintz
Principal Consultant
New Aspects of Software
Austin, TX
(512) 633-1198
On 6/20/07, John Panzer <jpanzer@...> wrote: > > Messages only have useful semantics if both parties understand them. The interface is uniform. All this hand-wringing is a result of people claiming that the meaning of a PUT request depends on the server implementation. The result of the message is what differs based on server implementation. > What > I recall is that there was no consensus that Atom servers must choose "omit > == unset" as opposed to "omit == don't care", and it's therefore unspecified > (by AtomPub) what happens when you omit a field. I don't care if AtomPub came to consensus on the wrong answer. > Note that any client that > cares about this must have already retrieved the original data it's > modifying and it only wanders into unspecified territory if it starts > dropping fields in the round-trip. If someone else thinks there was > actually consensus on this point please let the AtomPub editor know about > it. > Again, AtomPub's consensus doesn't matter. What you're saying is that there's no way for an Atom client to decide that an entry doesn't need a summary anymore. Broken. > Personally I like PATCH for clarity but worry about deployment > issues with intermediaries and libraries that even today think "all the > world's a GET". My next choice would be a POST with a standard delta update > MIME type to the resource, though that's a fallback position obviously. I > look forward to seeing James Snell's proposed RFC on the subject. Thoughts? XML deltas are incredibly complicated. Mark Baker wrote: > "Unset" suggests that the message, by leaving stuff out, is requesting > that said stuff be explicitly set to something (like a default). No, it doesn't. See the behavior of Apache when you send a PUT request containing an Atom entry. For "unset" to mean anything else, shared state would be required. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
On 20.06.2007, at 16:44, Mark Baker wrote:
> 303 is more like a 2xx in that it assumes the request was successful,
> but then sends the client to the other URI to find the results of the
> processing of the request.

Oh, right. Thanks Mark.

Next potential problem: 303 explicitly does not license the client to infer that the 303 Location URI is 'a substitute reference for the originally requested resource'[1]. So, when Joe writes: "In the case of a successful, or duplicate, request the client will be directed to the corresponding open_order.", the client actually cannot infer that the 303 Location identifies the corresponding resource.

Is that a problem? Wouldn't the client at some point need a 301 to update its local reference to the order? Or is that deferred to the media type?

Jan

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.4

> On 6/20/07, Jan Algermissen <algermissen1971@...> wrote:
>> Hi,
>>
>> in Joe's RESTified Day Trader[1], isnt't the proposed solution:
>>
>> Client 'pending order' collection
>> -------------------- POST --------------->
>> <--------------- 201 Created -------------
>> (Location: pending order)
>> Client 'pending order'
>> -------------- PUT ------------->
>> (order)
>> <--------- 303 See Other --------
>> (Location: open order URI)
>>
>> actually redirecting the PUT (as opposed to performing the PUT and
>> redirecting the client)?
>>
>> The 303 tells the client to PUT elsewhere and *not* that the PUT has
>> been performed (which would be 2xx).
>>
>> So we end up with the need for PUTing two times, don't we?
>>
>> Or am I missing something?
>>
>> Jan
>>
>> [1] http://bitworking.org/news/201/RESTify-DayTrader
Hi RESTians! The latest entry in my REST Dialogues series wraps up where I've been going with my 'Distributed Observer Pattern', AKA 'symmetric REST'. [The remainder of the series is a, hopefully less controversial, chat about the usual REST/ROA vs SOA stuff that we all know and love, including interoperability, transactions, security and messaging.] So, now that I've put all my, perhaps more radical, REST Business Logic and Integration cards on the table, would anyone be interested in engaging with me in discussion of this approach? Is it, um, really RESTful, for example?! =0) Cheers! Duncan
On Jun 20, 2007, at 11:43 PM, Duncan Cragg wrote: > Hi RESTians! > > The latest entry in my REST Dialogues series wraps up where I've been > going with my 'Distributed Observer Pattern', AKA 'symmetric REST'. > > [The remainder of the series is a, hopefully less controversial, chat > about the usual REST/ROA vs SOA stuff that we all know and love, > including interoperability, transactions, security and messaging.] > > So, now that I've put all my, perhaps more radical, REST Business > Logic > and Integration cards on the table, would anyone be interested in > engaging with me in discussion of this approach? > > Is it, um, really RESTful, for example?! =0) > > Cheers! > > Duncan Just in case, here is the link: http://duncan-cragg.org/blog/post/distributed-observer-pattern-rest- dialogues/ - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
> Just in case, here is the link: > > http://duncan-cragg.org/blog/post/distributed-observer-pattern-rest- > dialogues/ > > Oh, yes! Thanks! (um, just in case of what?!) Unfortunately, that link got wrapped and will show a nice stack trace from my Django blog code! Anyway, it all started here: http://duncan-cragg.org/blog/post/getting-data-rest-dialogues/ And, if even that gets wrapped, try just: http://duncan-cragg.org/blog/ There! Thanks again! Duncan
On Jun 21, 2007, at 12:30 AM, Duncan Cragg wrote: > >> Just in case, here is the link: >> http://duncan-cragg.org/blog/post/distributed-observer-pattern- >> rest- dialogues/ > Oh, yes! Thanks! (um, just in case of what?!) > > Unfortunately, that link got wrapped and will show a nice stack > trace from my Django blog code! > > Anyway, it all started here: > > http://duncan-cragg.org/blog/post/getting-data-rest-dialogues/ > > And, if even that gets wrapped, try just: > > http://duncan-cragg.org/blog/ > > There! Thanks again! > > Duncan > > Just in case somebody didn't know of it already! :) - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
Nick,

I think we have consensus that PUT should be "what RFC2616 says", and you correctly pointed out that the problem is that RFC2616 isn't really very precise on that matter.

That being said, I'd recommend:

- collaborating on PATCH so that partial updates have their specialized message, and

- taking the issue about PUT's semantics to the HTTP mailing list, as something to be resolved in rfc2616bis (should that ever happen).

Best regards, Julian
Forwarding from another list - this seems more appropriate for rest- discuss ... Begin forwarded message: > From: "Marc de Graauw" <marc@...> > Date: June 20, 2007 5:26:39 PM GMT+02:00 > To: <service-orientated-architecture@yahoogroups.com> > Subject: RE: [service-orientated-architecture] Anne on REST (Time > for Spring WS v. REST Campaign to Open) > Reply-To: service-orientated-architecture@yahoogroups.com > > Mark Baker: > > | On 6/19/07, Stefan Tilkov <stefan.tilkov@...> wrote: > | > You are right, I should have written "PUT is defined to be > | idempotent > | > (by the spec), POST is not - which means POST can or cannot be > | > idempotent". Seems a little like splitting hairs, but OK. > | > | Not quite. POST is defined to be non-idempotent, i.e. if you see an > | HTTP message with a POST request method, then that message is a > | non-idempotent message. What any particular receiving implementation > | *does* with such a message - idempotent or not - is an entirely > | separate issue. > > Jan Algermissen: > > | Actually, I'd say that every single invokation of POST is > | significant from a client's POV and it explicitly expects > | multiple POSTs to have distinct effects, or? > | > | IMO, POST *is* in fact non-idempotent...always. Per definition. > > I find it hard to find this in RFC2616, and it seems impossible as > well. > > RFC2616: > > "Methods can also have the property of "idempotence" in that (aside > from > error or expiration issues) the side-effects of N > 0 identical > requests is > the same as for a single request. The methods GET, HEAD, PUT and > DELETE > share this property." > > So PUT is idempotent, and POST not necessarily so, but where does > it say > that POST messages are _always_ non-idempotent? > > Seems impossible too. 
> Presumably a non-idempotent method has the property that "the
> side-effects of N > 0 identical requests is NOT the same as for a
> single request". So if I POST twice, and get 500 with a body "Credit
> expired" twice, that's an idempotent message, right? In fact, if I get
> that response once, my expectation would be to get it the next time as
> well. We'd have to re-interpret RFC2616 as really saying "for POST the
> side-effects of N > 0 identical successful requests SHOULD NOT be the
> same as for a single request", but this seems nonsensical to me.
> Whether a message is idempotent or not is up to the business logic
> behind it, and POST is telling the client it may not count on the
> message being idempotent, and must act accordingly (not just re-POST
> after failure). POST - IMO - isn't telling the client there are indeed
> such side-effects for N > 0 messages to be expected.
>
> I've always read RFC2616 as saying PUT is always idempotent, and with
> POST the client cannot count on it being idempotent or not, but
> correct me if I'm wrong.
>
> It also seems to leave an empty slot. With PUT, the client is supposed
> to create the URI, with POST, the server. So if I have some logic
> where I want the server to create the URI (who trusts those darn
> clients anyway!), and the logic is idempotent, what do I do?
>
> RFC 2616: "The fundamental difference between the POST and PUT
> requests is reflected in the different meaning of the Request-URI. The
> URI in a POST request identifies the resource that will handle the
> enclosed entity ... In contrast, the URI in a PUT request identifies
> the entity enclosed with the request".
>
> I can see how that leads to PUT being idempotent, but I cannot see why
> this would make POST necessarily non-idempotent.
>
> Marc de Graauw
>
> www.marcdegraauw.com
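Marc's "N > 0" formulation is easy to make concrete with a toy resource store (illustrative only, not any real server): repeating an identical PUT leaves the same state as doing it once, while a POST-style append to a collection does not.

```python
def put(store, uri, entity):
    """Idempotent: N identical PUTs leave the same state as one."""
    store[uri] = entity

def post_append(store, collection, entity):
    """Not idempotent: each POST to the collection adds another
    member, so N requests differ from one."""
    store.setdefault(collection, []).append(entity)
```

This also illustrates Marc's "empty slot": if you want server-assigned URIs (POST-style) but idempotent semantics (PUT-style), HTTP as defined gives you no method that promises both.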
On 6/21/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > Forwarding from another list - this seems more appropriate for rest- > discuss ... > > Begin forwarded message: > > > From: "Marc de Graauw" <marc@...> > > Date: June 20, 2007 5:26:39 PM GMT+02:00 > > To: <service-orientated-architecture@yahoogroups.com> > > Subject: RE: [service-orientated-architecture] Anne on REST (Time > > for Spring WS v. REST Campaign to Open) > > Reply-To: service-orientated-architecture@yahoogroups.com > > > > Mark Baker: > > > > | On 6/19/07, Stefan Tilkov <stefan.tilkov@...> wrote: > > | > You are right, I should have written "PUT is defined to be > > | idempotent > > | > (by the spec), POST is not - which means POST can or cannot be > > | > idempotent". Seems a little like splitting hairs, but OK. > > | > > | Not quite. POST is defined to be non-idempotent, i.e. if you see an > > | HTTP message with a POST request method, then that message is a > > | non-idempotent message. What any particular receiving implementation > > | *does* with such a message - idempotent or not - is an entirely > > | separate issue. > > > > Jan Algermissen: > > > > | Actually, I'd say that every single invokation of POST is > > | significant from a client's POV and it explicitly expects > > | multiple POSTs to have distinct effects, or? > > | > > | IMO, POST *is* in fact non-idempotent...always. Per definition. > > > > I find it hard to find this in RFC2616, and it seems impossible as > > well. > > > > RFC2616: > > > > "Methods can also have the property of "idempotence" in that (aside > > from > > error or expiration issues) the side-effects of N > 0 identical > > requests is > > the same as for a single request. The methods GET, HEAD, PUT and > > DELETE > > share this property." > > > > So PUT is idempotent, and POST not necessarily so, but where does > > it say > > that POST messages are _always_ non-idempotent? > > > > Seems impossible too. 
Presumably a non-idempotent method has the > > property > > that "the side-effects of N > 0 identical requests is NOT the same > > as for a > > single request" So if I POST twice, and get 500 with a body "Credit > > expired" > > twice, that's an idempotent message, right? In fact, if it get that > > response > > once, my expectation would be to get it the next time as well. We'd > > have to > > re-interpret RFC2616 as really saying "for POST the side-effects of > > N > 0 > > identical succesfull requests SHOULD NOT be the same as for a single > > request", but this seems nonsensical to me. Whether a message is > > idempotent > > or not is up to the business logic behind it, and POST is telling > > the client > > it may not count on the message being idempotent, and must act > > accordingly > > (not just re-POST after failure). POST - IMO - isn't telling the > > client > > there are indeed such side-effects for N > 0 messages to be expected. > > > > I've always read RFC2616 as saying PUT is always idempotent, and > > with POST > > the client cannot count on it being idempotent or not, but correct > > me if I'm > > wrong. > > > > It also seems to leave an empty slot. With PUT, the client is > > supposed to > > create the URI, with POST, the server. So if I have some logic > > where I want > > the server to create the URI (who trusts those darn clients > > anyway!), and > > the logic is idempotent, what do I do? > > > > RFC 2616: "The fundamental difference between the POST and PUT > > requests is > > reflected in the different meaning of the Request-URI. The URI in a > > POST > > request identifies the resource that will handle the enclosed > > entity ... In > > contrast, the URI in a PUT request identifies the entity enclosed > > with the > > request". > > > > I can see how that leads to PUT being idempotent, but I cannot see > > why this > > would make POST necessarily non-idempotent. > > > > Marc de Graauw As I understand it, Marc is correct. 
PUT is required to be idempotent by RFC2616. POST simply has no requirement. It might be, it might not be - that's up to the server. The UA cannot rely on any behavioural expectation. Therefore, unless the server 'makes a promise' by providing support for some protocol layered on top of HTTP (such as AtomPub) - all bets are off. Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
Stefan Tilkov wrote: > > > Forwarding from another list - this seems more appropriate for rest- > discuss ... > ... I'd really like to see HTTP related discussions on the (former and possible future) HTTP WG's mailing list, see <http://lists.w3.org/Archives/Public/ietf-http-wg/>. Best regards, Julian
[ Here's the response I posted to Marc ]

Hey Marc,

On 6/20/07, Marc de Graauw <marc@...> wrote:
> So PUT is idempotent, and POST not necessarily so, but where does it say
> that POST messages are _always_ non-idempotent?
>
> Seems impossible too. Presumably a non-idempotent method has the property
> that "the side-effects of N > 0 identical requests is NOT the same as for a
> single request"

No, because that definition is in terms of server behaviour and not message semantics (which a protocol is defined in terms of). A message-semantics-oriented definition (of idempotence) would be that a series of N identical requests means the same as one.

> I can see how that leads to PUT being idempotent, but I cannot see why this
> would make POST necessarily non-idempotent.

Because the definition is in terms of non-idempotent semantics, e.g. "annotate a resource", "append to a database".

Mark.
[ Attachment content not displayed ]
Nick Gall wrote:
> On 6/21/07, *Julian Reschke* <julian.reschke@...> wrote:
>
>     Nick,
>
>     I think we have consensus that PUT should be "what RFC2616 says", and
>     you correctly pointed out that the problem is that RFC2616 isn't really
>     very precise on that matter.
>
>     That being said, I'd recommend:
>
>     - collaborating on PATCH so that partial updates have their specialized
>     message, and
>
>     - taking the issue about PUT's semantics to the HTTP mailing list, as
>     something to be resolved in rfc2616bis (should that ever happen).
>
> Fixing the ambiguity problem at its root, RFC 2616, sounds sensible to
> me. I'll take a look at the relevant mailing lists to see what has been
> discussed.

Little, unfortunately.

> Rather than wait for a revised 2616, it would be nice if there were
> rough consensus on what the semantics for PUT should be. Is there? If

Absolutely. The first step is getting consensus on the issue, the next
step is proposing actual spec text.

> not, I think it is important that deeper explanations of REST highlight
> this fundamental disagreement among theorists and practitioners. (The
> otherwise excellent "RESTful Web Services" book does not seem to deal
> with the issue, for example.) Perhaps I'll contribute something to the
> REST wiki on the subject. (I looked there and did not see anything.)
>
> Finally, and most relevantly for this group, do any of the principles of
> REST have a bearing on which way the semantics of PUT should be revised?

I wouldn't think so. It seems to be purely a matter of HTTP semantics.

My impression is that people try to do more things with PUT than they
should, as they don't have PATCH. Maybe fixing the latter helps.

> For example, does the principle of "identification of resources"
> (everything via URI and a URI for everything) imply that elements of a
> representation that change independently should have their own URIs,
> enabling PUT to efficiently use only replacement semantics?
I think if it's desired to be able to independently edit these resources, separate URIs are a very good way to achieve this. See XCAP (RFC4825) and JCR (JSR-170). > And if REST principles don't give any guidance as to whether replacement > and merge semantics should be combined in one method (PUT) vs. separated > into two methods (PUT/PATCH), does that say anything about the > prescriptive power of REST? Just asking, not dissing REST (you know I > love REST). <grin> No comment :-). Best regards, Julian
Mark Baker:

| Hey Marc,
|
| On 6/20/07, Marc de Graauw <marc@...> wrote:
| > So PUT is idempotent, and POST not necessarily so, but where does it
| > say that POST messages are _always_ non-idempotent?
| >
| > Seems impossible too. Presumably a non-idempotent method has the
| > property that "the side-effects of N > 0 identical requests is NOT
| > the same as for a single request"
|
| No, because that definition is in terms of server behaviour and not
| message semantics (which a protocol is defined in terms of). A

This is copied from the RFC 2616 definition of idempotent (section 9.1.2),
with NOT inserted to make a definition for non-idempotence, so if the
definition is flawed because it is in terms of server behaviour, then the
definition of idempotence in RFC2616 is flawed. Right?

| message semantics oriented definition (of idempotence) would be that a
| series of N identical requests means the same as one.
|
| > I can see how that leads to PUT being idempotent, but I cannot see
| > why this would make POST necessarily non-idempotent.
|
| Because the definition is in terms of non-idempotent semantics, e.g.
| "annotate a resource", "append to a database".

RFC2616 contains a list of possible POST uses, but that list does not seem
exhaustive. So the question remains: where in RFC2616 does it say that POST
messages are always non-idempotent?

And one issue is still open: if I have some logic where I want the server to
create the URI but the logic is idempotent, what do I do?

Marc de Graauw

www.marcdegraauw.com
Robert Sayre wrote: > On 6/20/07, John Panzer <jpanzer@...> wrote: > ... >> What >> I recall is that there was no consensus that Atom servers must choose >> "omit >> == unset" as opposed to "omit == don't care", and it's therefore >> unspecified >> (by AtomPub) what happens when you omit a field. > > I don't care if AtomPub came to consensus on the wrong answer. I think it actually came to consensus that there was no consensus. Though I could be wrong; there could be no consensus about whether there was consensus. > >> Note that any client that >> cares about this must have already retrieved the original data it's >> modifying and it only wanders into unspecified territory if it starts >> dropping fields in the round-trip. If someone else thinks there was >> actually consensus on this point please let the AtomPub editor know >> about >> it. >> > > Again, AtomPub's consensus doesn't matter. What you're saying is that > there's no way for an Atom client to decide that an entry doesn't need > a summary anymore. Broken. PUT ... <summary/> ... > >> Personally I like PATCH for clarity but worry about deployment >> issues with intermediaries and libraries that even today think "all the >> world's a GET". My next choice would be a POST with a standard delta >> update >> MIME type to the resource, though that's a fallback position >> obviously. I >> look forward to seeing James Snell's proposed RFC on the subject. >> Thoughts? > > XML deltas are incredibly complicated. What about a simple "overwrite if element present" delta MIME type that handles the 80% case that everybody seems to want? > > Mark Baker wrote: >> "Unset" suggests that the message, by leaving stuff out, is requesting >> that said stuff be explicitly set to something (like a default). > > No, it doesn't. See the behavior of Apache when you send a PUT request > containing an Atom entry. For "unset" to mean anything else, shared > state would be required. 
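[ Ed.: Robert's "overwrite if element present" idea can be sketched as a tiny delta format over maps. This is a Python sketch of the concept only; the `None`-removes convention is an assumption made here for illustration, not anything AtomPub or the PATCH draft specifies. ]

```python
def merge_patch(target, patch):
    """Apply an 'overwrite if present' delta: keys in the patch replace
    keys in the target; a None value removes the key (hypothetical
    convention); keys absent from the patch are left untouched."""
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_patch(result[key], value)
        else:
            result[key] = value
    return result

entry = {"title": "Old", "summary": "short", "categories": ["a", "b", "c"]}
# Overwrite the title, drop the summary, leave categories alone:
patched = merge_patch(entry, {"title": "New", "summary": None})
print(patched)   # prints {'title': 'New', 'categories': ['a', 'b', 'c']}
```

Note that Robert's category question still bites: removing one category from a list means sending the whole replacement list (`{"categories": ["a", "c"]}`), which is John's PUT-the-whole-thing answer in miniature.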
> Apache's ETag support is also borked (or was); should we use Apache as the reference for ETags? Just sayin'.
On 6/21/07, John Panzer <jpanzer@...> wrote: > Robert Sayre wrote: > > > > Again, AtomPub's consensus doesn't matter. What you're saying is that > > there's no way for an Atom client to decide that an entry doesn't need > > a summary anymore. Broken. > PUT ... <summary/> ... Not convincing. Try this: How do I remove a category, if there are three categories listed? > Apache's ETag support is also borked (or was); should we use Apache as > the reference for ETags? > > Just sayin'. ... -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
Nick,

On 6/21/07, Nick Gall <nick.gall@...> wrote:
> Mark, you completely lost me with "if interoperability depends on
> [consensus] rather than protocol, we're in big trouble." As I understand
> these admittedly slippery terms, the two (or more) parties that are
> attempting to interoperate must agree on how the protocol they are jointly
> using works. And consensus is defined as general agreement.
>
> Perhaps you are making a distinction based on the degree of unanimity of the
> consensus. That is, you do not think complete agreement is required. If so,
> I agree. But it is my understanding that all IETF "standards" require rough
> consensus and running code. So I don't think seeking rough consensus will
> get us into big trouble, and that's all I'm looking for. I think a lack
> thereof will get us into trouble.

"Rough consensus and running code" is for when you're working on a spec.
My point is that we've already got a spec, and no matter what we might
agree to here, folks will look to the spec for the answer, not us.

That's not to say that we can't improve the wording for 2616bis though.

> I was hoping such rough consensus had been reached in some forum I was not
> aware of, but it seems instead that it has not yet been reached.

I dunno. Does anybody disagree with "set the state of the targeted
resource to that represented in the provided representation"? I suppose
that question should really be asked on ietf-http-wg though.

Mark.
Marc,

You need to say "NOT NECESSARILY the same as for a single request",
because that language means the absence of the constraint, whereas your
other language is the logical negation of the constraint, a weird and
probably useless notion.

I don't get the "server behavior" vs "message semantics" thing, and I
think you can ignore that :-). The definition of "non-idempotent" in
RFC 2616 is "flawed" in the sense that it is not defined, and hence you
have gone off in the weeds attempting to interpret it. RFC2616 doesn't
say anything about POST and idempotence, and that's the point. In the
absence of such a constraint, POST is free to be: POST(POST(x)) !=
POST(x). But that's freedom, not constraint, so POST(POST(x)) == POST(x)
is also allowed, just not promised.

As for your open question, you're not really describing the application
goal very well, but I think the answer will still be found in two
principles:

1. Allow the server to manage the creation of identity to the degree
that you trust it to do so. That could mean using POST even when you
guess a new identity should not be minted. (But should you (client)
really be the authority?)

2. When the client needs the promise of idempotence, arrange to have the
client in possession of the identifier of the thing where it cares about
side effects.

Beware: following those principles could lead to loose coupling!

HTH,
Walden

----- Original Message -----
From: "Marc de Graauw" <marc@...>
To: "'Mark Baker'" <distobj@...>
Cc: "'Rest List'" <rest-discuss@yahoogroups.com>
Sent: Thursday, June 21, 2007 3:54 PM
Subject: RE: [rest-discuss] Must POST be non-idempotent?

: Mark Baker:
:
: | Hey Marc,
: |
: | On 6/20/07, Marc de Graauw <marc@...> wrote:
: | > So PUT is idempotent, and POST not necessarily so, but where does
: | > it say that POST messages are _always_ non-idempotent?
: | >
: | > Seems impossible too.
: | > Presumably a non-idempotent method has the property that "the
: | > side-effects of N > 0 identical requests is NOT the same as for a
: | > single request"
: |
: | No, because that definition is in terms of server behaviour and not
: | message semantics (which a protocol is defined in terms of). A
:
: This is copied from the RFC 2616 definition of idempotent (section 9.1.2),
: with NOT inserted to make a definition for non-idempotence, so if the
: definition is flawed because it is in terms of server behaviour, then the
: definition of idempotence in RFC2616 is flawed. Right?
:
: | message semantics oriented definition (of idempotence) would be that a
: | series of N identical requests means the same as one.
: |
: | > I can see how that leads to PUT being idempotent, but I cannot see
: | > why this would make POST necessarily non-idempotent.
: |
: | Because the definition is in terms of non-idempotent semantics, e.g.
: | "annotate a resource", "append to a database".
:
: RFC2616 contains a list of possible POST uses, but that list does not seem
: exhaustive. So the question remains: where in RFC2616 does it say that POST
: messages are always non-idempotent?
:
: And one issue is still open: if I have some logic where I want the server to
: create the URI but the logic is idempotent, what do I do?
:
: Marc de Graauw
:
: www.marcdegraauw.com
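[ Ed.: Walden's freedom-vs-constraint point fits in a few lines of Python; `put_like` and `post_like` are made-up toy state-transition functions, not HTTP code. ]

```python
# Idempotence as Walden writes it: f(f(x)) == f(x).
def put_like(state, value):      # "make the state this": constrained
    return value

def post_like(state, value):     # "append this": free to differ
    return state + [value]

s = ["a"]
# PUT-like updates satisfy the equation...
assert put_like(put_like(s, "b"), "b") == put_like(s, "b")
# ...while POST is merely *allowed* to violate it -- here it does:
print(post_like(post_like(s, "b"), "b"))   # prints ['a', 'b', 'b']
print(post_like(s, "b"))                   # prints ['a', 'b']
```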
> Mark Baker:
>
> | On 6/19/07, Stefan Tilkov <stefan.tilkov@...> wrote:
> | > You are right, I should have written "PUT is defined to be idempotent
> | > (by the spec), POST is not - which means POST can or cannot be
> | > idempotent". Seems a little like splitting hairs, but OK.
> |
> | Not quite. POST is defined to be non-idempotent, i.e. if you see an
> | HTTP message with a POST request method, then that message is a
> | non-idempotent message. What any particular receiving implementation
> | *does* with such a message - idempotent or not - is an entirely
> | separate issue.

Agreed, POST is non-idempotent, and GET is idempotent, which is one of
the key aspects of self-descriptive messages: when an intermediary sees
a POST it 'knows' that it isn't idempotent, while it does 'know' that a
GET is idempotent.

This is not to be confused with something like old-SOAP where every
request is a POST, even if the underlying semantics of that particular
SOAP message was a Safe and Idempotent retrieval. That Safe and
Idempotent retrieval is certainly a candidate for being done as a GET if
that service were made RESTful, and thus start to take advantage of
caching, etags, etc. But as long as it is tunneled inside a
non-idempotent POST it will be neither safe, nor idempotent.

 -joe

--
Joe Gregorio http://bitworking.org
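[ Ed.: Joe's caching point is easy to demonstrate with a toy shared cache. This is a Python sketch; the class is hypothetical, and real HTTP caches obey far more rules (freshness, Vary, validators) than this. The only thing it models is that a cache can act on the method in the message, and a retrieval tunneled through POST is opaque to it. ]

```python
class TinyCache:
    """A toy forward cache: it may store responses only for methods the
    message itself declares safe (GET/HEAD); a retrieval tunneled
    through POST can never be served from cache."""
    SAFE = {"GET", "HEAD"}

    def __init__(self, origin):
        self.origin = origin          # callable: (method, uri) -> body
        self.store = {}

    def request(self, method, uri):
        if method in self.SAFE and uri in self.store:
            return self.store[uri]    # cache hit: origin not contacted
        body = self.origin(method, uri)
        if method in self.SAFE:
            self.store[uri] = body
        return body

hits = {"count": 0}
def origin(method, uri):
    hits["count"] += 1
    return f"resource at {uri}"

cache = TinyCache(origin)
cache.request("GET", "/a"); cache.request("GET", "/a")
print(hits["count"])   # prints 1: second GET served from cache
cache.request("POST", "/a"); cache.request("POST", "/a")
print(hits["count"])   # prints 3: both POSTs reach the origin
```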
I didn't realize that Web3S was proposing adding an HTTP verb, namely
UPDATE [1]. In light of the recent discussions around partial PUTs, is
UPDATE something to consider? How should it differ from PATCH? Also, I'm
probably confused here, but isn't it the same author behind both these
verbs (i.e. Yaron Goland)?

- Steve

[1] http://intertwingly.net/blog/2007/06/18/Web3S

--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
Marc de Graauw wrote: > ... > And one issue is still open: if I have some logic where I want the server to > create the URI but the logic is idempotent, what do I do? > ... I've been proposing ADDMEMBER (<http://greenbytes.de/tech/webdav/draft-reschke-http-addmember-00.html>), but was told it doesn't add anything to POST. I disagree with that (and so do others), but hadn't the energy to follow up on this one. Best regards, Julian
I think this thread is getting away from the original point, which was
reliability, and I think there is some importance to the issue at hand,
so I'll start fresh.

Everybody agrees on the basics: the PUT method is for idempotent
messages, and every PUT message should be idempotent. Clients and
intermediaries may count on that. POST is not defined to be idempotent,
so clients and intermediaries may not count on POST messages being
idempotent.

Here people start to differ.

Stefan Tilkov wrote:
| PUT is nice because it's [supposed to be] idempotent. POST is not.
| This means that a client is allowed (within REST's constraints) to
| retry a PUT if it hasn't received a response.
| If you have only POST - for whatever reason - you need to do
| something else. Examples include http://ietfreport.isoc.org/idref/
| draft-nottingham-http-poe/ and http://www.goland.org/draft-goland-
| http-reliability-00.text

I wrote:
| ... POST isn't necessarily non-idempotent: it may be
| idempotent or not. In fact, reliability approaches such as
| the ones you refer to below are
| ways of making POST idempotent.

Stefan, and later Alan Dean, agreed.

Jan Algermissen wrote:
| Actually, I'd say that every single invocation of POST is
| significant from a client's POV and it explicitly expects
| multiple POSTs to have distinct effects, or?
|
| IMO, POST *is* in fact non-idempotent...always. Per definition.

and Mark Baker joined:
| Not quite. POST is defined to be non-idempotent, i.e. if you see an
| HTTP message with a POST request method, then that message is a
| non-idempotent message.

So the question is: must every POSTed message be non-idempotent? Or: if
I have an idempotent message, may I use POST?

The importance is in the reliability aspect. If all POST messages are
non-idempotent by definition, reliability protocols such as POE [1],
SOArity [2] and HTTPLR [3] (POST variant) are wrong: they make POST
messages idempotent.
If POST messages can be either idempotent or not, there is nothing wrong with them, nor with the idea of using a protocol on top of HTTP to make POST messages idempotent. I would like that. Joe Gregorio: | Agreed, POST is non-idempotent, and GET is idempotent, which is one | of the key aspects of self-descriptive messages, that when an | intermediary | sees a POST it 'knows' that it isn't idempotent, while it | does 'know' that | a GET is idempotent. I'd say the intermediary knows it must treat the message as if it were non-idempotent. You and Mark both agree the intermediary does not know anything about the actual message and what it does. I think RFC2616 defines three knowledge levels about messages for clients and intermediaries: 1) I know nothing (POST), and must take all possible precautions. 2) I know it's idempotent (PUT, DELETE, GET etc.) 3) I know it's safe (GET, etc.), implies 2. And I'd still like to see a quote from RFC2616 where it says "POST messages are (always) non-idempotent" or a passage which implies this. Marc de Graauw www.marcdegraauw.com [1] http://ietfreport.isoc.org/idref/draft-nottingham-http-poe/ [2] http://www.goland.org/draft-goland-http-reliability-00.text [3] http://www.dehora.net/doc/httplr/draft-httplr-01.html#rfc.section.8.2.1
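[ Ed.: Marc's three knowledge levels are exactly what a generic client or intermediary can encode from RFC 2616's method definitions (sections 9.1.1 and 9.1.2) without knowing anything about the application behind a URI. A sketch, with made-up function names: ]

```python
# Method properties per RFC 2616 sections 9.1.1 (safe) and 9.1.2
# (idempotent): safety implies idempotence, and POST promises neither.
SAFE = {"GET", "HEAD"}
IDEMPOTENT = SAFE | {"PUT", "DELETE", "OPTIONS", "TRACE"}

def knowledge_level(method):
    """1: know nothing, take all precautions; 2: known idempotent;
    3: known safe (which implies idempotent)."""
    if method in SAFE:
        return 3
    if method in IDEMPOTENT:
        return 2
    return 1

def may_resend_without_asking(method):
    """A client that never saw a response may blindly resend only the
    messages whose method itself declares idempotence."""
    return knowledge_level(method) >= 2

print(knowledge_level("GET"), knowledge_level("PUT"), knowledge_level("POST"))
# prints 3 2 1
print(may_resend_without_asking("POST"))   # prints False
```

This is why browsers re-issue a GET silently but put up a "resubmit the form?" dialog for POST.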
Steve Bjorg wrote: > > > I didn't realize that Web3S was proposing adding a HTTP verb, namely > UPDATE [1]. In light of the recent discussions around partial PUTs, is > UPDATE something to consider? How should it differ from PATCH? Also, I > probably confused here, but isn't it the same author behind both these > verbs (i.e. Yaron Goland)? (all imho) 1) UPDATE is something to consider. 2) PATCH is supposed to have the desired semantics, so UPDATE wouldn't be needed (it's a bad name anyway, as PUT already has update semantics, the key difference is that it's *partial*). 3) No. Yaron didn't work on PATCH (yet?). Best regards, Julian
I can't believe we're discussing this. HTTP doesn't make any guarantee about the idempotence of POST. Thus a client can not rely on it, unless it has additional information. In absence of that information, it has to assume it's not idempotent. And no, there's no reason not to do idempotent things with POST, it's just *better* for everybody to use an idempotent message instead (*). Best regards, Julian (*) Such as ADDMEMBER. <ducks>
Julian Reschke wrote:
> Marc de Graauw wrote:
> > ...
> > And one issue is still open: if I have some logic where I want the
> > server to create the URI but the logic is idempotent, what do I do?
> > ...
>
> I've been proposing ADDMEMBER
> (<http://greenbytes.de/tech/webdav/draft-reschke-http-addmember-00.html>),
> but was told it doesn't add anything to POST. I disagree with that (and
> so do others), but hadn't the energy to follow up on this one.

Hm, I have to take that back. ADDMEMBER of course is not idempotent,
because repeating a request will lead to another resource being created.

So you'll need something in the message that allows a server to detect a
duplicate of a request.

Best regards, Julian
Am 22.06.2007 um 11:18 schrieb Julian Reschke:
> Julian Reschke wrote:
> > Marc de Graauw wrote:
> > > ...
> > > And one issue is still open: if I have some logic where I want the
> > > server to create the URI but the logic is idempotent, what do I do?
> > > ...
> >
> > I've been proposing ADDMEMBER
> > (<http://greenbytes.de/tech/webdav/draft-reschke-http-addmember-00.html>),
> > but was told it doesn't add anything to POST. I disagree with that
> > (and so do others), but hadn't the energy to follow up on this one.
>
> Hm, I have to take that back. ADDMEMBER of course is not idempotent,
> because repeating a request will lead to another resource being
> created.
>
> So you'll need something in the message that allows a server to
> detect a duplicate of a request.

There is the POE proposal by Mark Nottingham:

http://www.mnot.net/drafts/draft-nottingham-http-poe-00.txt
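[ Ed.: the duplicate detection Julian asks for amounts to giving each creation request an identity of its own, so a replay can be recognized and answered with the original result instead of creating twice. A toy Python sketch; the client-supplied token convention is invented here for illustration — POE itself works differently, via server-advertised one-shot resources. ]

```python
import itertools

class Collection:
    """Exactly-once creation: the client sends a token it made up; the
    server remembers which tokens it has seen, so a retried request
    returns the URI minted the first time (hypothetical convention)."""
    def __init__(self):
        self.members = {}
        self.seen = {}              # token -> URI already minted for it
        self._ids = itertools.count(1)

    def create(self, token, representation):
        if token in self.seen:      # a replay, not a new request
            return self.seen[token], False
        uri = f"/coll/{next(self._ids)}"
        self.members[uri] = representation
        self.seen[token] = uri
        return uri, True

coll = Collection()
uri1, created1 = coll.create("tok-42", "hello")
uri2, created2 = coll.create("tok-42", "hello")   # client retried
print(uri1 == uri2, created1, created2, len(coll.members))
# prints True True False 1
```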
John D. Heintz wrote:
> There is nothing JSON does that XML can't (what is the data equivalent
> of Turing Complete?), but JSON is much less costly (verbosity,
> programming weight) than XML for those things.

An interesting question. I suspect that it's very hard to come up with a
language that isn't "information complete". Essentially if you have at
least two states and an infinite space, anything can be encoded. I
suspect the only languages that are not "information complete" would be
ones with a finite number of strings.

However just as Intercal is not an appropriate replacement for C, and
C++ is not an appropriate replacement for Java, there are practical
considerations beyond theoretical expressiveness that come into play
when choosing one's languages. The information space that JSON handles
well is large, but it is not nearly as large as the information space of
the Web. Structured data is a crutch invented by computer scientists
because they don't have computers as powerful as a human brain. Tables,
maps, lists, trees and more are all kludges designed to try to make some
sense out of an unordered, unstructured world.

XML and its tree structures are not a perfect representation of human
knowledge and the information we need to encode. However, precisely
because XML is less structured than maps and lists and tables, it can
handle more information than can be encoded in maps and lists and
tables. There are many, many examples where JSON (and other map-list
data structures) becomes practically unmanageable but which XML handles
without blinking. However it's not a two-way street. XML can not only
encode everything JSON can encode; it can do so practically, usefully,
and efficiently. The reverse is not true.

I hope one day soon there will be still less structured, more generic
information description languages that can well handle the information
structures XML cannot.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
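[ Ed.: Elliotte's asymmetry claim can be made concrete: any JSON value maps mechanically into XML, while XML mixed content — text runs interleaved with elements, where order is significant — has no natural map/list shape. A Python sketch; the `json_to_xml` mapping is one of many possible, invented here for illustration. ]

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(value, tag="item"):
    """Encode any JSON value as an XML element (one illustrative mapping)."""
    elem = ET.Element(tag)
    if isinstance(value, dict):
        for k, v in value.items():
            elem.append(json_to_xml(v, k))
    elif isinstance(value, list):
        for v in value:
            elem.append(json_to_xml(v, "li"))
    else:
        elem.text = json.dumps(value)   # scalars keep their JSON lexical form
    return elem

# Any JSON document maps into XML mechanically...
doc = json_to_xml({"title": "REST", "tags": ["http", "uri"]}, "entry")
print(ET.tostring(doc, encoding="unicode"))

# ...but mixed content interleaves text and elements, so a plain
# dict-of-lists loses the text runs (or their ordering):
mixed = ET.fromstring("<p>This <em>word</em> matters</p>")
print([mixed.text, mixed[0].text, mixed[0].tail])
# prints ['This ', 'word', ' matters']
```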
* Steve Bjorg <steveb@...> [2007-06-22 07:30]: > In light of the recent discussions around partial PUTs, is > UPDATE something to consider? How should it differ from PATCH? I don’t think UPDATE appreciably differs in intended semantics from PATCH. It’s an awfully chosen name, though. FWIW, James Snell is working on reviving the PATCH draft (it’s expired). Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
I like your points, Harold. I would put it differently.
A language is mathematically defined as something like the following:
a syntax + a semantics [1].
I.e. you have to know how strings can be combined to form sentences.
Then you have to know how to associate meanings with those signs, how
those signs relate to the world. I am simplifying a lot, as I don't
want to write a book on the subject here.
XML is original with respect to other languages such as Java, JSON,
etc, in that
- it defines a syntax, without defining the semantics,
- it uses URIs as name space identifiers [2]
RDF/XML gives a semantics to a subset of XML documents. The linking of
RDF and XML is attractive, because RDF is a semantics without a
syntax, and XML is a syntax without semantics, and both use URI
identifiers. Whether RDF/XML is the best way to do this mapping or
not is a good question. It works close to XML writers' intuitions, but
perhaps it is not close enough. There are other proposals on the
table such as TRiX that make the syntax reflect more closely the
semantics.
Henry
PS. I add a few comments to your text below
[1] see the image on the blog http://blogs.sun.com/bblfish/entry/
answers_to_duck_typing_done
"answers to duck typing done right", which shows graphically how
syntax and semantics are related
[2] see "Duck Typing Done right"
http://blogs.sun.com/bblfish/entry/duck_typing_done_right
for an explanation as to why using URIs is a big improvement
over what other languages have to offer
On 22 Jun 2007, at 13:57, Elliotte Harold wrote:
> John D. Heintz wrote:
>
>> There is nothing JSON does that XML can't (what is the data
>> equivalent
>> of Turing Complete?), but JSON is much less costly (verbosity,
>> programming weight) than XML for those things.
>
>
> An interesting question. I suspect that it's very hard to come up
> with a language that isn't "information complete". Essentially if
> you have at least two states and an infinite space, anything can be
> encoded. I suspect the only languages that are not "information
> complete" would be ones with a finite number of strings.
>
> However just as Intercal is not an appropriate replacement for C,
> and C++ is not an appropriate replacement for Java, there are
> practical considerations beyond theoretical expressiveness that
> come into play when choosing one's languages. The information space
> that JSON handles well is large, but it is not nearly as large as
> the information space of the Web. Structured data is a crutch
> invented by computer scientists because they don't have computers
> as powerful as a human brain. Tables, maps, list, trees and more
> are all kludges designed to try to make some sense out of an
> unordered, unstructured world.
That is taking the problem from the wrong angle I think. Tables,
maps, lists, etc, are some well known structures that are easy to
compute on. There are many others. The world is structured in many ways.
The main problem with JSON is that it does not natively support URIs
(see my [2]); as a result it is not optimally designed for Resource
Oriented Architecture, which we need when working in a global
information space such as the web.
> XML and its tree structures are not a perfect representation of
> human knowledge and the information we need to encode. However,
> precisely because XML is less structured than maps and lists and
> tables, it can handle more information than can be encoded in maps
> and lists and tables. There are many, many examples where JSON (and
> other map-list data structures) becomes practically unmanageable
> but which XML handles without blinking. However it's not a two-way
> street. XML cannot only encode everything JSON can encode. It can
> do so practically, usefully, and efficiently. The reverse is not true.
You can't have it both ways. XML is a tree, and it is less
structured? Which do you want? XML is a Markup Language. That's the
essence of it. XML can encode everything you are going to want to
say, since it is a possible encoding of RDF, and RDF allows you to
say pretty much everything, including absurdities.
> I hope one day soon there will be still less structured, more
> generic information description languages that can well handle the
> information structures XML cannot.
The thing to remember is that XML is a syntax, not a language. It's a
way to mark up text documents. The main use case of the language was
to start of with text documents that make sense as written and mark
up information in them. This is not always what one wants to do.
Sometimes one has data in a database that is not in a document
format, but just a bunch of relations, that one wants to make available.
If you are going to look at generality of structures, I'd venture
that relations are the most general structure in existence. Relating
two things or a thing and its properties is pretty much the most
basic thing you can do. Everything else: trees, tables, etc, can be
built out of that.
Henry
> --
> Elliotte Rusty Harold elharo@...
> Java I/O 2nd Edition Just Published!
> http://www.cafeaulait.org/books/javaio2/
> http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/
> cafeaulaitA/
On 6/22/07, Marc de Graauw <marc@...> wrote: > So the question is: must every POSTed message be non-idempotent? Or: if I > have a idempotent message, may I use POST? You're begging the question there, because there's an implicit assumption that message idempotence (or safety) is independent of the request semantic. It isn't, it's entirely dependent upon it. With all due respect to Walden above, understanding the difference between definition in terms of message semantics or server behaviour is **critical** to understanding this. Perhaps it's the name "POST" that's confusing. So, consider methods called "ADD", or "INSERT", or "ANNOTATE". Hopefully you would agree that all of those are non-idempotent (assuming their english language definitions). Therefore, any message sent with those request methods is non-idempotent. > The importance is in the reliability aspect. If all POST messages are > non-idempotent by definition, reliability protocols such as POE [1], SOArity > [2] and HTTPLR [3] (POST variant) are wrong: they make POST messages > idempotent. They don't, AFAICT. They use hypermedia to coordinate an idempotent result. Mark.
Robert Sayre wrote: > On 6/21/07, John Panzer <jpanzer@...> wrote: > >> Robert Sayre wrote: >> >>> Again, AtomPub's consensus doesn't matter. What you're saying is that >>> there's no way for an Atom client to decide that an entry doesn't need >>> a summary anymore. Broken. >>> >> PUT ... <summary/> ... >> > > Not convincing. Try this: How do I remove a category, if there are > three categories listed? > You'd PUT the whole representation, replacing the entire entry. This always works. It also works for removing the summary. Servers which support categories can't reasonably do partial PUTs I think. Again, I hope that a better way (or ways) will come out of the PATCH revival effort. Clearly _something_ better is needed. -John
On 6/22/07, Julian Reschke <julian.reschke@...> wrote:
> Mark Baker wrote:
> > Perhaps it's the name "POST" that's confusing. So, consider methods
> > called "ADD", or "INSERT", or "ANNOTATE". Hopefully you would agree
> > that all of those are non-idempotent (assuming their english language
> > definitions). Therefore, any message sent with those request methods
> > is non-idempotent.
>
> Sorry, you're losing me here.
>
> For instance, I can easily imagine a definition of an ANNOTATE method,
> that, when a request is repeated, results in exactly the same server state.

The resulting state of the server is immaterial. The message is
non-idempotent because ANNOTATE is a non-idempotent action (see another
comment below about ANNOTATE).

> > They don't, AFAICT. They use hypermedia to coordinate an idempotent result.
>
> They make a specific set of POST requests idempotent.

If you mean the series of POE requests and responses (including GETs) as
a whole, I agree. I don't believe that the POST request in that exchange
is idempotent though (in case you were suggesting that).

BTW, as it relates to ANNOTATE above, one could say that a specific
sequence of ANNOTATE requests is idempotent. But as with POST, I don't
believe that any individual request is idempotent.

> BTW: where does hypermedia come into play here? At least in POE it's
> just a new HTTP header.

Right. But its (POE-Links) value is a list of URIs, hence hypermedia ...
although I suppose the link in the form is the authoritative one. Ok,
whatever. 8-)

Mark.
[ Attachment content not displayed ]
Mark Baker wrote:
> On 6/22/07, Julian Reschke <julian.reschke@...> wrote:
> > Mark Baker wrote:
> > > Perhaps it's the name "POST" that's confusing. So, consider methods
> > > called "ADD", or "INSERT", or "ANNOTATE". Hopefully you would agree
> > > that all of those are non-idempotent (assuming their english language
> > > definitions). Therefore, any message sent with those request methods
> > > is non-idempotent.
> >
> > Sorry, you're losing me here.
> >
> > For instance, I can easily imagine a definition of an ANNOTATE method,
> > that, when a request is repeated, results in exactly the same server
> > state.
>
> The resulting state of the server is immaterial. The message is
> non-idempotent because ANNOTATE is a non-idempotent action (see
> another comment below about ANNOTATE).

RFC2616 defines idempotence in terms of side-effects (see
<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.9.1.2>). Are
you saying that when RFC2616 talks about "side effects", it's not about
the server state?

> ..
> > > They don't, AFAICT. They use hypermedia to coordinate an idempotent
> > > result.
> >
> > They make a specific set of POST requests idempotent.
>
> If you mean the series of POE requests and responses (including GETs)
> as a whole, I agree. I don't believe that the POST request in that
> exchange is idempotent though (in case you were suggesting that).

I'm not sure what you're referring to. POE tells us how a client can
discover that a resource is a POE resource. POST requests on POE
resources *are* idempotent. What am I missing?

> ...

Best regards, Julian
Elliotte Harold wrote:
> ... Structured data is a crutch invented by computer scientists
> because they don't have computers as powerful as a human brain. Tables,
> maps, lists, trees and more are all kludges designed to try to make some
> sense out of an unordered, unstructured world.

anarchist

> ...There are many, many examples where JSON (and other map-list
> data structures) becomes practically unmanageable but which XML handles
> without blinking.

I don't doubt you, but can you provide some concrete examples, rather
than just claim there are "many, many" of them?

-- Patrick Mueller
http://muellerware.org
Julian Reschke <julian.reschke@...> writes:

> RFC2616 defines idempotence in terms of side-effects (see
> <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.9.1.2>). Are
> you saying that when RFC2616 talks about "side effects", it's not about
> the server state?

The side-effect must not be about the server state; otherwise it is a
very unreasonable restriction. For example, logging a GET request
violates that interpretation of idempotency because the server state
changes (it now has a new log entry).

==> GET /server-log
<== 200, 500 log entries

==> GET /some_url
<== 200

==> GET /server-log
<== 200, 501 log entries

Server state changes? Yes. Idempotency violated? No.

YS
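YS's log example can be run as a small sketch (the server class and
names here are invented for illustration, not from any spec): a GET
handler appends to a request log, so incidental server state changes,
yet repeating the GET leaves the resource state and the response
unchanged.

```python
# A minimal sketch of the point above: logging is a side effect the
# client didn't ask for, and it doesn't break idempotence.

class TinyServer:
    def __init__(self):
        self.resources = {"/some_url": "hello"}  # the actual resource state
        self.log = []                            # incidental state: grows on every request

    def get(self, path):
        self.log.append(("GET", path))           # side effect the client didn't request
        return self.resources.get(path)

server = TinyServer()
first = server.get("/some_url")
second = server.get("/some_url")

assert first == second == "hello"   # resource state and response: unchanged
assert len(server.log) == 2         # incidental server state: changed anyway
```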
On 6/22/07, Julian Reschke <julian.reschke@...> wrote:
> RFC2616 defines idempotence in terms of side-effects (see
> <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.9.1.2>). Are
> you saying that when RFC2616 talks about "side effects", it's not about
> the server state?

The spec is a bit split-brained in this respect, as you'd expect with so
many editors. But really what I'm talking about is this oft-quoted
(though perhaps not quite as often as it need be) bit;

  Naturally, it is not possible to ensure that the server does not
  generate side-effects as a result of performing a GET request; in
  fact, some dynamic resources consider that a feature. The important
  distinction here is that the user did not request the side-effects,
  so therefore cannot be held accountable for them.

Here's the equivalent of that for POST & idempotency;

  Naturally, it is not possible to ensure that the server behaves
  non-idempotently as a result of performing a POST request; in
  fact, some resources consider that a feature. The important
  distinction here is that the user did not request the idempotent
  behaviour so therefore cannot be held accountable for it.

Here's the equivalent of that for PUT & idempotency;

  Naturally, it is not possible to ensure that the server behaves
  idempotently as a result of performing a PUT request; in fact, some
  resources consider that a feature. The important distinction here
  is that the user did not request the non-idempotent behaviour so
  therefore cannot be held accountable for it.

Follow?

> >> > They don't, AFAICT. They use hypermedia to coordinate an idempotent
> >> > result.
> >>
> >> They make a specific set of POST requests idempotent.
> >
> > If you mean the series of POE requests and responses (including GETs)
> > as a whole, I agree. I don't believe that the POST request in that
> > exchange is idempotent though (in case you were suggesting that).
>
> I'm not sure what you're referring to.
>
> POE tells us how a client can discover that a resource is a POE
> resource. POST requests on POE resources *are* idempotent.
>
> What am I missing?

Just the same distinction between message semantics and server behaviour
I've been harping on about in this thread 8-)

To sum up, all POST requests are non-idempotent, but some servers'
processing of a POST request may be idempotent. Sequences of messages
involving POST requests may also be idempotent (e.g. POE).

Mark.
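The coordination Mark sums up can be sketched with a hypothetical
POE-style resource (class and names invented; this is not the actual POE
draft mechanics): the first POST succeeds, replays are refused, so a
client that retries after a lost response still ends up with exactly one
created entity. The *sequence* is idempotent even though each POST
message is not.

```python
# A rough sketch of a one-shot ("POST once exactly") resource.

class PoeResource:
    def __init__(self):
        self.entity = None
        self.consumed = False

    def post(self, body):
        if self.consumed:
            return 405, self.entity  # already used: replaying does nothing
        self.consumed = True
        self.entity = body
        return 201, self.entity

res = PoeResource()
assert res.post("order #1") == (201, "order #1")  # first POST creates
assert res.post("order #1") == (405, "order #1")  # retry: no second order
```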
Mark Baker wrote:
> On 6/22/07, Marc de Graauw <marc@marcdegraauw.com> wrote:
> > So the question is: must every POSTed message be non-idempotent? Or: if I
> > have an idempotent message, may I use POST?
>
> You're begging the question there, because there's an implicit
> assumption that message idempotence (or safety) is independent of the
> request semantic. It isn't, it's entirely dependent upon it.
>
> With all due respect to Walden above, understanding the difference
> between definition in terms of message semantics or server behaviour
> is **critical** to understanding this.
>
> Perhaps it's the name "POST" that's confusing. So, consider methods
> called "ADD", or "INSERT", or "ANNOTATE". Hopefully you would agree
> that all of those are non-idempotent (assuming their English language
> definitions). Therefore, any message sent with those request methods
> is non-idempotent.

Sorry, you're losing me here.

For instance, I can easily imagine a definition of an ANNOTATE method,
that, when a request is repeated, results in exactly the same server state.

> > The importance is in the reliability aspect. If all POST messages are
> > non-idempotent by definition, reliability protocols such as POE [1],
> > SOArity [2] and HTTPLR [3] (POST variant) are wrong: they make POST
> > messages idempotent.
>
> They don't, AFAICT. They use hypermedia to coordinate an idempotent result.

They make a specific set of POST requests idempotent.

BTW: where does hypermedia come into play here? At least in POE it's
just a new HTTP header.

Best regards, Julian
On Jun 22, 2007, at 6:39 AM, Henry Story wrote:
> I like your points Harold. I would put it differently.
>
> A language is mathematically defined as something like the following:
> a syntax + a semantics [1].

A language is a grammar. If by semantics, you mean the rules for
constructing valid sentences in the language (i.e. tree grammar, regular
grammar, etc.) then I agree, but if by semantics you mean "meaning,"
then we're heading into deep waters. Languages don't have meanings.
Constructs in a language MAY have a meaning. So, these two sets (all
linguistically correct sentences and all meaningful sentences) have
different sizes and shouldn't be mixed.

> XML is original with respect to other languages such as Java, JSON,
> etc, in that
> - it defines a syntax, without defining the semantics,

This is wrong, regardless of the meaning of semantics used previously.
Clearly XML has a grammar (referring to the first meaning of
"semantics"; the second meaning doesn't even apply).

> - it uses URIs as name space identifiers [2]
>
> RDF/XML gives a semantics to a subset of XML documents. The linking of
> RDF and XML is attractive, because RDF is a semantics without a
> syntax, and XML is a syntax without semantics, and both use URI
> identifiers. Whether RDF/XML is the best way to do this mapping or
> not is a good question. It works close to XML writers' intuitions, but
> perhaps it is not close enough. There are other proposals on the
> table such as TRiX that make the syntax reflect more closely the
> semantics.

I haven't checked out TRiX.

BTW, does RDF support hypergraphs (i.e. edges that connect many nodes)?
Mapping hypergraphs to graphs is a bit tedious.

> Henry
>
> PS.
> I add a few comments to your text below
>
> [1] see the image on the blog
> http://blogs.sun.com/bblfish/entry/answers_to_duck_typing_done
> "answers to duck typing done right" which shows graphically how
> syntax and semantics are related
> [2] see "Duck Typing Done right"
> http://blogs.sun.com/bblfish/entry/duck_typing_done_right
> for an explanation as to why using URIs is a big improvement
> over what other languages have to offer

- Steve

--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
On 22 Jun 2007, at 20:11, Steve Bjorg wrote:
> On Jun 22, 2007, at 6:39 AM, Henry Story wrote:
>
>> I like your points Harold. I would put it differently.
>>
>> A language is mathematically defined as something like the following:
>> a syntax + a semantics [1].
>
> A language is a grammar. If by semantics, you mean the rules for
> constructing valid sentences in the language (i.e. tree grammar,
> regular grammar, etc.) then I agree, but if by semantics you mean
> "meaning," then we're heading into deep waters. Languages don't
> have meanings. Constructs in a language MAY have a meaning. So,
> these two sets (all linguistically correct sentences and all
> meaningful sentences) have different sizes and shouldn't be mixed.

No, by semantics I do not mean the rules for constructing valid
sentences, and nor do most people. Constructing valid sentences is the
domain of syntax. Just read any book on mathematical logic. Syntax deals
with how you combine strings to form valid sentences. From "Introduction
to Elementary Mathematical Logic" by Abram Aronovich Stolyar (which I
found on Google Books):

[[ Syntax studies the elements of the structure of a formalised
language without regard to what it expresses. Semantics studies the
elements and structure of a formalised language in connection with its
meaningful interpretation (in connection with what it expresses in
extralinguistic reality). ]]

>> XML is original with respect to other languages such as Java, JSON,
>> etc, in that
>> - it defines a syntax, without defining the semantics,
> This is wrong, regardless of the meaning of semantics used
> previously. Clearly XML has a grammar (referring to the first
> meaning of "semantics"; the second meaning doesn't even apply).

You need to go back and look at what people mean by semantics.

>> - it uses URIs as name space identifiers [2]
>>
>> RDF/XML gives a semantics to a subset of XML documents.
>> The linking of RDF and XML is attractive, because RDF is a semantics
>> without a syntax, and XML is a syntax without semantics, and both use
>> URI identifiers. Whether RDF/XML is the best way to do this mapping
>> or not is a good question. It works close to XML writers' intuitions,
>> but perhaps it is not close enough. There are other proposals on the
>> table such as TRiX that make the syntax reflect more closely the
>> semantics.
> I haven't checked out TRiX.
>
> BTW, does RDF support hypergraphs (i.e. edges that connect many
> nodes)? Mapping hypergraphs to graphs is a bit tedious.

I have not played with hypergraphs, so I can't tell from experience,
though theory says it is possible. But going from what Wikipedia says
about them, namely that "While graph edges are pairs of nodes,
hyperedges are arbitrary sets of nodes, and can therefore contain an
arbitrary number of nodes."

That's ok. You can have names for sets. Say you call the set of cats
and dogs <http://eg.com/cd> .

You can say

:myCat a <http://eg.com/cd> .

so the instanceof relation (abbreviated to a above) is a relation that
relates things to arbitrary graphs.

Henry

>> Henry
>>
>> PS. I add a few comments to your text below
>>
>> [1] see the image on the blog
>> http://blogs.sun.com/bblfish/entry/answers_to_duck_typing_done
>> "answers to duck typing done right" which shows graphically how
>> syntax and semantics are related
>> [2] see "Duck Typing Done right"
>> http://blogs.sun.com/bblfish/entry/duck_typing_done_right
>> for an explanation as to why using URIs is a big improvement
>> over what other languages have to offer
>
> - Steve
>
> --------------
> Steve G. Bjorg
> http://www.mindtouch.com
> http://www.opengarden.org
On Jun 22, 2007, at 12:18 PM, Henry Story wrote:
> On 22 Jun 2007, at 20:11, Steve Bjorg wrote:
>
>> On Jun 22, 2007, at 6:39 AM, Henry Story wrote:
>>
>>> I like your points Harold. I would put it differently.
>>>
>>> A language is mathematically defined as something like the following:
>>> a syntax + a semantics [1].
>>>
>> A language is a grammar. If by semantics, you mean the rules for
>> constructing valid sentences in the language (i.e. tree grammar,
>> regular grammar, etc.) then I agree, but if by semantics you mean
>> "meaning," then we're heading into deep waters. Languages don't
>> have meanings. Constructs in a language MAY have a meaning. So,
>> these two sets (all linguistically correct sentences and all
>> meaningful sentences) have different sizes and shouldn't be mixed.
>
> No, by semantics I do not mean the rules for constructing valid
> sentences, and nor do most people. Constructing valid sentences is
> the domain of syntax. Just read any book on mathematical logic.
> Syntax deals with how you combine strings to form valid sentences.
> From "Introduction to Elementary Mathematical Logic" by Abram
> Aronovich Stolyar (which I found on Google Books)

Euh, I was just trying to figure out what you meant by giving you the
benefit of the doubt. I have heard worse atrocities than referring to
syntactic rules as semantics, but I digress. We are on the same page
here.

>>> XML is original with respect to other languages such as Java, JSON,
>>> etc, in that
>>> - it defines a syntax, without defining the semantics,
>> This is wrong, regardless of the meaning of semantics used
>> previously. Clearly XML has a grammar (referring to the first
>> meaning of "semantics"; the second meaning doesn't even apply).
>
> You need to go back and look at what people mean by semantics.

I still think your statement is wrong. Said differently, why do you
state that XML has no semantics and JSON does?

>> BTW, does RDF support hypergraphs (i.e.
>> edges that connect many nodes)? Mapping hypergraphs to graphs is a
>> bit tedious.
>
> I have not played with hypergraphs, so I can't tell from experience,
> though theory says it is possible. But going from what Wikipedia says
> about them, namely that "While graph edges are pairs of nodes,
> hyperedges are arbitrary sets of nodes, and can therefore contain an
> arbitrary number of nodes."
>
> That's ok. You can have names for sets. Say you call the set of cats
> and dogs <http://eg.com/cd> .
>
> You can say
>
> :myCat a <http://eg.com/cd> .
>
> so the instanceof relation (abbreviated to a above) is a relation that
> relates things to arbitrary graphs.

Cool! At some point, I really need to start playing with RDF and
assimilate its properties further.

- Steve

--------------
Steve G. Bjorg
http://www.mindtouch.com
http://www.opengarden.org
On 22 Jun 2007, at 22:40, Steve Bjorg wrote:
>>>> XML is original with respect to other languages such as Java, JSON,
>>>> etc, in that
>>>> - it defines a syntax, without defining the semantics,
>>> This is wrong, regardless of the meaning of semantics used
>>> previously. Clearly XML has a grammar (referring to the first
>>> meaning of "semantics"; the second meaning doesn't even apply).
>>
>> You need to go back and look at what people mean by semantics.
>
> I still think your statement is wrong.

Mhh?

> Said differently, why do you state that XML has no semantics and
> JSON does?

I may be wrong about JSON having semantics. But this makes the problem
of JSON just more acute. As I understand it, JSON is a way of exchanging
serialised JavaScript objects. As a result it would inherit the
semantics of JavaScript. It is popular because it is easy for JavaScript
to produce these objects, and easy for it to consume them. There must
therefore be some meaning to the things sent over the wire. The string
or the integer objects are well understood, I imagine, to have certain
properties.

But it could be that JSON has no explicit semantics. This means that the
consumer of the JSON will impart the implicit semantics on the JSON.
I.e.: JSON is creating a tight binding between the consumer of the
JavaScript code and the server producing it. This is bad because it
means that the meaning of the JSON list will be dependent on who
publishes it. So if I receive some code, and pass it on, or put it on a
different server, the JSON will have a different meaning. The server
could produce the same JSON, but change the client, and thereby the
meaning of the JSON could change completely. This is the problem of
working without URI namespaces.

XML has no explicit semantics, but RDF gives it one. The URI namespaces
mean that the direct meaning of the document is understood everywhere to
be the same. Which is required for an open space of data.

Henry
On 6/22/07, Nick Gall <nick.gall@...> wrote:
> It doesn't help because it begs the question "set how much of the
> state of the targeted resource"? Both replacement semantics and merge
> semantics fit this description. To clear up the ambiguity, it should
> say either:

The spec is a little curt, but it's not ambiguous. PUT has replacement
semantics: the entity in the request is to be stored under the URI.

You can contrast this with PATCH, defined in RFC 2068, section 19.6.1.1.
There, we learn that a PATCH request will "include sufficient
information to allow the server to recreate the changes necessary to
convert the original version of the resource to the desired version."
That's a merge.

This thread has some great examples of PUT-to-merge falling down pretty
badly. You can add them to the Astoria example. Don't take arguments on
WG mailing lists too seriously if you're trying to figure out the right
answer. Those lists are filled with people describing the behavior of
the products they've already written. You get resolutions like "the
consensus is that there's no consensus" when it's not politically
acceptable to clearly classify certain implementations as
non-conformant.

-- Robert Sayre

"I would have written a shorter letter, but I did not have the time."
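The replacement-vs-merge distinction Robert draws can be shown with a
toy store (the store layout and function names are invented for
illustration, not from any spec): PUT stores the enclosed entity under
the URI, discarding what was there, while a PATCH-style merge changes
only the parts named in the request.

```python
# A toy in-memory store contrasting the two semantics.

store = {"/doc": {"title": "Draft", "author": "jr", "tags": ["http"]}}

def put(uri, entity):
    store[uri] = entity         # replacement: prior state is discarded

def patch_merge(uri, changes):
    store[uri].update(changes)  # merge: fields not named survive

patch_merge("/doc", {"title": "Final"})
assert store["/doc"]["author"] == "jr"      # merge kept the author

put("/doc", {"title": "Final"})
assert store["/doc"] == {"title": "Final"}  # replace dropped author and tags
```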
: With all due respect to Walden above, understanding the difference
: between definition in terms of message semantics or server behaviour
: is **critical** to understanding this.

Well, with all due respect to you, Mark, while I agree with your
assertion above, and also before, I thought it was off target for the
problem as stated. The message semantics we're discussing have to do
with changes to resource state, and the protocol we're discussing has a
server in the loop as the agent of that change. So it's unavoidable that
the meanings (you say semantics) are being written out operationally in
terms of server behavior. That doesn't bother me, and from what I
observe it's rarely the cause of confusion.

What caused confusion in this instance was trying to interpret
"non-idempotent" as a constraint: f(f(x)) != f(x). That's a logical
negation of idempotence, whereas the authors of RFC2616 had in mind
something different: the absence of any constraint at all. (Replace the
"!=" with an operator which means "no relation".)

That's a pretty serious confusion if you end up thinking that you must
design a system such that if it POSTs twice, the result had better not
be the same as a single POST. That would make about as much sense as
refusing to drive at 40MPH as soon as you reach the "end speed zone"
sign.

Walden
Mike Dierken wrote:
> > Again, I hope that a better way (or ways) will come out of the
> > PATCH revival effort. Clearly _something_ better is needed.
>
> What was it about POST that fails to help here (partial update of a
> resource)?

It doesn't fail, but it's less good than a more specific verb. Also, it
makes it difficult to use POST to "add a subordinate resource" to a
resource when you're already using POST to merge. You can use MIME types
to disambiguate, but it just seems less clear to me.

But yeah, if nothing better was available, I'd use POST.
On 6/23/07, Nick Gall <nick.gall@...> wrote:
> Robert, we've both seen these arguments raised before and their
> counterarguments:

The counterarguments aren't technical. Have you noticed that?

> PUT is only "requests that the enclosed entity be stored". It does not
> say MUST.

Yes, specs are harder to read these days because their authors must
guard against abusive interpretations like this.

> If the resource exists, "the enclosed entity SHOULD be considered as a
> modified version". Again, only SHOULD, not MUST, and furthermore it
> says "modified" not "replacement".

Read the definition of SHOULD.

> The statement that virtually all bets are off concerning the semantics
> of PUT: "HTTP/1.1 does not define how a PUT method affects the state
> of an origin server."

That's not what that sentence means. Read my first post in this thread,
where I explain that servers can do anything they want and claim success
(2xx). That is quite different from disputing the meaning of the
message.

> Citing PATCH doesn't help, because PATCH was deprecated due to lack of
> use, which could be read as evidence that merge semantics via PUT was
> good enough.

That's a fallacious argument. PATCH was deprecated because it wasn't
implemented. Neither was PUT, tbh.

> My conclusion is that the spec is ambiguous regarding PUT replacement
> vs. merge semantics and there is no consensus in the relevant
> communities on what the semantics should be.

Your conclusion is wrong. My conclusion is that it takes quite a bit of
cognitive dissonance to give a PUT message merge semantics. This is
rest-discuss, where we like uniform interfaces. The one concrete example
in this thread has shown that a PUT-merge works when you edit some Atom
elements, but not others. Oops!

-- Robert Sayre

"I would have written a shorter letter, but I did not have the time."
On 6/23/07, Walden Mathews <waldenm@...> wrote:
> That's a pretty serious confusion if you end up thinking that you
> must design a system such that if it POSTs twice, the result had
> better not be the same as a single POST.

+1
It seems to me that at the heart of the problem is hierarchical context.

Taking the simple case first. Let's consider whether the method
semantics are ambiguous if the resource is a standalone file such as
robots.txt:

GET /robots.txt
... returns the whole file

DELETE /robots.txt
... removes the whole file

POST /robots.txt
... up to the server, but could be a simple append to the end of the file

PUT /robots.txt
... replaces the whole file

I hope that the above is a trivial enough example to be uncontentious. I
would think that everyone can agree that the semantics are not ambiguous
for robots.txt.

Now let's look at a hierarchy:

/fruit
/fruit/apples
/fruit/apples/granny-smith
/fruit/apples/golden-delicious

and let's consider the semantics of acting on /fruit/apples:

GET /fruit/apples
... returns a representation of all apples

DELETE /fruit/apples
... removes all apple representations

POST /fruit/apples
... up to the server, could simply be an append of a new type of apple

PUT /fruit/apples
... surely the only semantic that makes sense is to replace all apple
representations? The semantic isn't partial.

If you want to implement a partial amendment of /fruit/apples then
surely you should use POST, which *is* ambiguous, because PUT is the
full monty.

Regards,
Alan Dean
http://thoughtpad.net/alan-dean
http://simplewebservices.org
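Alan's collection semantics can be sketched with an in-memory store (the
store and function names are invented): POST appends one apple, while
PUT on the collection URI replaces the whole set.

```python
# A sketch of collection semantics: POST appends, PUT replaces wholesale.

fruit = {"apples": {"granny-smith": "green", "golden-delicious": "yellow"}}

def post_collection(name, key, value):
    fruit[name][key] = value     # server-chosen append semantics

def put_collection(name, members):
    fruit[name] = dict(members)  # full replacement: the full monty

post_collection("apples", "gala", "red")
assert set(fruit["apples"]) == {"granny-smith", "golden-delicious", "gala"}

put_collection("apples", {"braeburn": "red"})
assert set(fruit["apples"]) == {"braeburn"}  # all prior apples are gone
```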
Mark Baker wrote:
> On 6/22/07, Julian Reschke <julian.reschke@...> wrote:
> > RFC2616 defines idempotence in terms of side-effects (see
> > <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.9.1.2>). Are
> > you saying that when RFC2616 talks about "side effects", it's not about
> > the server state?
>
> The spec is a bit split-brained in this respect, as you'd expect with
> so many editors. But really what I'm talking about is this oft-quoted
> (though perhaps not quite as often as it need be) bit;
>
>   Naturally, it is not possible to ensure that the server does not
>   generate side-effects as a result of performing a GET request; in
>   fact, some dynamic resources consider that a feature. The important
>   distinction here is that the user did not request the side-effects,
>   so therefore cannot be held accountable for them.

Yes.

> Here's the equivalent of that for POST & idempotency;
>
>   Naturally, it is not possible to ensure that the server behaves
>   non-idempotently as a result of performing a POST request; in
>   fact, some resources consider that a feature. The important
>   distinction here is that the user did not request the idempotent
>   behaviour so therefore cannot be held accountable for it.

No, I don't agree this is equivalent. For POST, the client just doesn't
know (without extra information). So it's not requesting a
non-idempotent operation, it's requesting an operation that *may* be
non-idempotent.

> ...
> Follow?

Nope.

> ...
> Just the same distinction between message semantics and server
> behaviour I've been harping on about in this thread 8-)
>
> To sum up, all POST requests are non-idempotent, but some servers'
> processing of a POST request may be idempotent. Sequences of messages
> involving POST requests may also be idempotent (e.g. POE).
>
> Mark.

It seems to me that all we have is terminology confusion.
You seem to deny that "idempotent" applies to an HTTP request, but want
us to talk about the processing of a request.

Best regards, Julian
Mike Dierken wrote:
> ...
> A full update via PUT has the possibility of being cached without
> requiring the new version of the resource to be sent in the response,
> whereas a partial update via PUT does not allow this possibility.

Caching the request body of PUT will have surprising results when the
server doesn't store it octet by octet. Are you aware of any real-world
intermediates doing this?

> Is that an important aspect in deciding on what method to use:
> PUT-with-partial-content, PATCH or POST?

I wouldn't think so. IMHO all an intermediate can do here safely is
invalidate cache entries for the Request-URI (well, except for POST);
and even that is only helpful if we're talking about a landscape where
all requests go through a single intermediate.

Best regards, Julian
Robert Sayre wrote:
> ...
> > PUT is only "requests that the enclosed entity be stored". It does
> > not say MUST.
>
> Yes, specs are harder to read these days because their authors must
> guard against abusive interpretations like this.

+1

There is a group of people claiming that any normative statement in an
IETF spec needs to use RFC2119 keywords. This is not true. Read RFC2119.
Keep in mind that there are full internet standards such as RFC3986
(URI) which don't even use RFC2119 keywords at all. My understanding is
(and I'm sure there'll be disagreement) that anything in a spec is
normative unless stated otherwise (such as by labeling it an example).

> ...
> > The statement that virtually all bets are off concerning the semantics of
> > PUT: "HTTP/1.1 does not define how a PUT method affects the state of an
> > origin server."
>
> That's not what that sentence means. Read my first post in this
> thread, where I explain that servers can do anything they want and
> claim success (2xx). That is quite different from disputing the
> meaning of the message.

But I do agree that this sentence is causing lots of confusion, and that
it is one of those things that RFC2616bis should improve.

> > Citing PATCH doesn't help, because PATCH was deprecated due to lack
> > of use, which could be read as evidence that merge semantics via PUT
> > was good enough.
>
> That's a fallacious argument. PATCH was deprecated because it wasn't
> implemented. Neither was PUT, tbh.

I do agree it's a bad argument, but I'll have to say that PUT *is*
widely implemented. AFAICT, PATCH was removed because the transition
from RFC2068 to RFC2616 moved the specification from "Proposed Standard"
to "Draft Standard", and at that time, there wasn't enough data about
PATCH being implemented as stated.

> > My conclusion is that the spec is ambiguous regarding PUT replacement vs.
> > merge semantics and there is no consensus in the relevant communities on
> > what the semantics should be.
> Your conclusion is wrong. My conclusion is that it takes quite a bit
> of cognitive dissonance to give a PUT message merge semantics. This is
> rest-discuss, where we like uniform interfaces. The one concrete
> example in this thread has shown that a PUT-merge works when you edit
> some Atom elements, but not others. Oops!

Agreed.

It seems that people are obsessed with the theory that the verbs defined
in RFC2616 must be sufficient for anything. Yes, they are, but only if
you use POST as fallback. Please don't start abusing PUT just because
PATCH isn't there yet, and you don't like to use POST instead.

Best regards, Julian
Josh Sled wrote:
> Not quite. JSON is the subset of JavaScript that is the simple notation for
> representing structured data. That contains strings, numbers, booleans, and
> lists and maps thereof. If you look around, you'll notice that pretty much
> every programming language has these constructs, and that is not by
> coincidence.

Depends on what you mean by a programming language, and what you mean by
"has". SQL doesn't have lists or maps, though it can imitate them with
tables. Classic C doesn't have booleans, though it can imitate them with
ints. Java doesn't have native lists or maps, though it has these in the
standard library. Some other languages do have these constructs.

The point is that you can always hack together some representation of
lists, maps, and so forth in any reasonably powerful language. It's not
a question of what you can do, but what's convenient to do.

The interesting thing about XML is that, unlike most other languages and
data representation formats, it doesn't stop or even start with lists
and maps and structured data. It's about trees and semistructured data.
That's an incredibly powerful and useful generalization, but it's so
radical that ten years in a lot of developers still haven't left the
list/map playground. You can get a lot done with lists and maps, but
there's even more information in the world that doesn't fit into lists
and maps in any reasonable way.

We aren't done yet. Semistructured is better than structured, but we
won't have achieved real info nirvana until we learn how to manage
unstructured data. When that is achieved, computer science will have
finally grown up.

-- Elliotte Rusty Harold
elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
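One concrete instance of the tree-versus-list/map point: mixed content,
where text and child elements interleave at specific positions. The
sentence below is invented for illustration; there is no natural
list-or-map encoding that preserves the interleaving, but an XML tree
represents it directly.

```python
# Parse a mixed-content paragraph and inspect how text is interleaved
# with the child elements.
import xml.etree.ElementTree as ET

doc = ET.fromstring("<p>POST is <em>not</em> idempotent, says <a>Mark</a>.</p>")

assert doc.text == "POST is "                       # text before the first child
assert [child.tag for child in doc] == ["em", "a"]  # children embedded mid-sentence
assert doc[0].tail == " idempotent, says "          # text between the children
```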
Patrick Mueller wrote:
> I don't doubt you, but can you provide some concrete examples, rather
> than just claim there are "many, many" of them?

Hint: you're reading one now.
Hint: Open your web browser.
Hint: Open Microsoft Word.

Most of the world's information is *not* in relational databases. It's
not in databases of any kind. It's locked up in books, Word documents,
PowerPoint presentations (well, I suppose most of those are technically
information-free :-) ) and so on. You get the idea. Database
practitioners can be so myopic that they don't even count this sort of
content as information, but it completely dwarfs the amount of
information we have carefully structured in relational databases and
data warehouses.

XML can handle some of that very nicely. Relational databases and JSON
can't. XML can't handle all of it. (Don't forget video and audio.) But
it can handle more of it than more structured formats can.

-- Elliotte Rusty Harold
elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Harold wrote:
> An interesting question. I suspect that it's very hard to come up with a
> language that isn't "information complete". Essentially if you have at
> least two states and an infinite space, anything can be encoded. I
> suspect the only languages that are not "information complete" would be
> ones with a finite number of strings.

Actually I now recall, you don't even need two states. One will do. You
simply count the number of marks made. Unary data encoding is not very
efficient, but as long as you have infinite space, it's fully able to
express anything you can express in binary.

I do wonder why we've limited data storage to binary systems. It seems
moving to a quaternary or higher encoding would exponentially increase
the amount of data you could store in the same space, though there'd be
some increase in the complexity of the reading and writing hardware.
However, off the top of my head, the only system I can remember that
used more than a binary encoding would be analog modems faster than 2400
bps. Perhaps quantum computing will finally break us out of the binary
ghetto.

-- Elliotte Rusty Harold
elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 6/23/07, Julian Reschke <julian.reschke@...> wrote: > It seems to me that all we have is terminology confusion. You seem to > deny that "idempotent" applies to a HTTP request, but want us to talk > about the processing of a request. Ack. Obviously I haven't been communicating my position very well. I emphatically *agree* that idempotency is a quality of an HTTP request. In fact, my argument depends on that being the case because I claim that just by examining the request (i.e. not by waiting to see what happens on the server), one can determine whether it's idempotent or not. A server's behaviour can also be idempotent or not, of course. But that's completely independent of the idempotency of the message it processes. Maybe we're in violent agreement? Mark.
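Mark's point, that idempotency can be read off the request without watching the server, amounts to a lookup keyed only on the method, per RFC 2616 section 9.1.2. A minimal sketch:

```python
# Methods RFC 2616 defines as idempotent; POST is the notable absentee.
# A client or intermediary can classify a request this way without ever
# observing what the server actually does with it.
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def is_idempotent(method: str) -> bool:
    return method.upper() in IDEMPOTENT_METHODS
```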
On 6/23/07, Nick Gall <nick.gall@...> wrote: > On 6/23/07, Robert Sayre <sayrer@...> wrote: > > On 6/23/07, Nick Gall <nick.gall@...> wrote: > > > If the resource exists, "the enclosed entity SHOULD be considered as a > > > modified version". Again, only SHOULD, not MUST, and furthermore it says > > > "modified" not "replacement". > > > > Read the definition of SHOULD. > > > > You didn't address the spec saying "modified" not "replaced". OK. The spec says "modified version of the one residing on the origin server". Let's say I have a diff I want to apply. Is the enclosed entity, the diff file itself, a "modified version" of what's on the server? No. It could be combined with the one residing on the origin server to produce a new version, but that's the definition of PATCH, isn't it? <http://diveintomark.org/archives/2004/08/16/specs> -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
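Robert's distinction can be sketched with a toy in-memory store: a PUT entity stands on its own as the new version, while a diff only produces a new version when combined with what the origin server already holds. The dict-of-updates "diff" format below is invented purely for illustration.

```python
store = {}

def handle_put(uri, entity):
    # The enclosed entity is itself the new version.
    store[uri] = entity

def handle_patch(uri, diff):
    # The enclosed entity is *not* a version; it must be combined with
    # the one residing on the origin server to produce one.
    current = dict(store[uri])
    current.update(diff)
    store[uri] = current
```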
On 6/23/07, Alan Dean <alan.dean@...> wrote: > PUT /fruit/apples > > ... surely the only semantic that makes sense is to replace all apple > representations? The semantic isn't partial. > The effect of "PUT /fruit/apples" on "/fruit/apples/gala" is not knowable in the realm of RFC2616, nor should it be. The hierarchy is defined completely within the context of application design. It is up to the designer of the application to: * define "PUT /fruit/apples" as complete replacement of all apple representations * not allow PUT on "/fruit/apples" * devise another scheme of side-effects of "PUT /fruit/apples" :DG<
Mark Baker: | I emphatically *agree* that idempotency is a quality of an HTTP | request. In fact, my argument depends on that being the case because | I claim that just by examining the request (i.e. not by waiting to see | what happens on the server), one can determine whether it's idempotent | or not. Aren't you defending the (circular) position that a message is non-idempotent because the chosen method (POST) is non-idempotent, and we must choose POST because the message semantics are non-idempotent? What about this case: I set up a web service where my friends can create pages on my web server for themselves. As anybody who has taken an inordinate amount of time to correct my many misunderstandings and shortcomings qualifies as a 'friend', and given your effort on this list, you certainly qualify. So you can create http://www.marcdegraauw.com/friend/markbaker, and enclose in the body some comment which appears on the page. My server however creates the page, which will include links to all my blog entries about you and more. You can't do: PUT http://www.marcdegraauw.com/friend/markbaker, because RFC2616 says: "the URI in a PUT request identifies the entity enclosed with the request" and http://www.marcdegraauw.com/friend/markbaker does not identify the comment, but the page-to-be-created. You can do: POST http://www.marcdegraauw.com/friend/ with 'markbaker' and the comment in the body. But if you POST once, twice or N times, my server will end up in exactly the same state: once a friend, always a (=1) friend. Sounds pretty idempotent to me. Marc de Graauw www.marcdegraauw.com
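Marc's friend-page service can be sketched to show why the repeated POST is harmless: the server derives the page from the name, so N identical requests converge on one state. Names, URI layout, and page contents below are illustrative only, not from any real service.

```python
pages = {}

def post_friend(name, comment):
    # Server-assigned URI: the client POSTs to /friend/, the server
    # decides what the page looks like and where it lives.
    uri = "/friend/" + name
    if uri not in pages:
        # Once a friend, always a (=1) friend: repeats are no-ops.
        pages[uri] = {"comment": comment, "links": "<links to blog entries>"}
    return uri
```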
> You didn't address the spec saying "modified" not "replaced". It does say "version" which to me indicates fully usable in place of an earlier version.
> If I were still coding, the way I would implement what would effectively be merge semantics > with PUT is to simply design a media type (based on XML most likely) that enabled a complete representation, > just in diff format. > A given resource is allowed to use many representations to communicate its state. > Why not just provide a diff based state representation? Hmm, I would never equate 'diff' with 'complete representation', even if it did reference the base copy which was used to compute the diff. But that's just me. > I believe this diff representation of foo qualifies as "a 'modified version' of what's on the > server". What makes it so is the fact that it contains the URL to the complete current state. Well, aside from not making the representation self-contained (which isn't necessarily a requirement), the URL would have to reference the particular version which this diff was based on.
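Mike's caveat, that the diff must reference the particular version it was computed against and not just the resource, could be captured by pinning the base version's ETag inside the payload. The diff format here is entirely hypothetical, a sketch of the idea rather than any proposed media type:

```python
def make_diff_representation(base_url, base_etag, changes):
    # The payload names both the complete current state (by URL) and the
    # exact version (by ETag) this diff was computed against.
    return {"base": base_url, "base-etag": base_etag, "changes": changes}

def apply_diff(current_state, current_etag, diff):
    # Refuse to merge against any version other than the pinned one.
    if diff["base-etag"] != current_etag:
        raise ValueError("diff was computed against a different version")
    merged = dict(current_state)
    merged.update(diff["changes"])
    return merged
```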
On 6/23/07, Nick Gall <nick.gall@...> wrote: > > On 6/23/07, Mike Dierken <dierken@...> wrote: > > > > > > > > You didn't address the spec saying "modified" not "replaced". > > It does say "version" which to me indicates fully usable in place of an earlier version. > > > > > Many version control systems save previous versions as only the delta from the prior version, so I don't think the mention of "version" indicates a complete version vs. a delta version. > > For those of you who did not follow my trailfire of links to the arguments on both sides of PUT semantics, Roy Fielding appears to agree that PUT's semantics are NOT required to be replacement semantics: > > An AtomPP protocol exchange may result in all kinds of funky > behavior on the server, none of which matters to AtomPP. Just > like HTTP. And yes, it was designed that way *on purpose*, in spite > of the fact that some people have very different notions of what > makes a good application protocol. > > It is only when we talk about specific applications of AtomPP, > such as an authoring interface to a corporate blog, that we can > say anything about the anticipated state change on the server. > SUCH INFORMATION DOES NOT NEED TO BE IN THE PROTOCOL SPECIFICATION. > [emphasis added] > > > Likewise, "it is only when we talk about specific applications of PUT...that we can say anything about the anticipated state change on the server." I am not saying that Roy's word is law on the issue, but the fact that he does NOT agree that PUT MUST or even SHOULD have replacement semantics is, for me, definitive evidence that the spec is ambiguous. Nick, Forgive me for nit-picking, but I think that Roy was saying that the HTTP spec doesn't need to be prescriptive, but the application protocol layered on top *does*: "It is only when we talk about specific applications of AtomPP, such as an authoring interface to a corporate blog, that we can say anything about the anticipated state change on the server."
Is that what you meant? :-) Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
John Panzer wrote: > Messages only have useful semantics if both parties understand them. > What I recall is that there was no consensus that Atom servers must > choose "omit == unset" as opposed to "omit == don't care", and it's > therefore unspecified (by AtomPub) what happens when you omit a field. > Note that any client that cares about this must have already retrieved > the original data it's modifying and it only wanders into unspecified > territory if it starts dropping fields in the round-trip. If someone > else thinks there was actually consensus on this point please let the > AtomPub editor know about it. > There wasn't consensus on this point, but I for one continue to argue that HTTP requires that a server that substantially changes a client request not return a 200 level response to a PUT. This is irrespective of what APP says. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
John Panzer wrote: > You'd PUT the whole representation, replacing the entire entry. This > always works. It also works for removing the summary. I agree. However some people here do not agree with this, and have maintained that the server is free to do anything it likes, including retaining the old categories, adding any new ones, and still returning a 200. At this point, I simply hope that no one is unwise enough to implement that behavior, even if the spec allows it (or, more properly, does not explicitly forbid it). -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Lots of recent discussions have sent me down a path thinking again about the REST take on extensibility and also about how new services and protocols are deployed on the web. I started by observing lots of talk about deployment, which puzzled me a little. A priori, I would think that deployment concerns shouldn't drive the way you do your resource modeling when you're designing. The advice I gave the JSF folks was [1]: Seek ye first the resource model and its righteousness; and all these things shall be added unto you. I truly believe that. Spending time upfront developing a good resource model and discussing it with as many people as possible will save your bacon down the line. Good things follow from a resource model that is aligned with the underlying architecture. I continue to see far too many people starting with performance or premature optimizations in mind based on such things as 1. an aversion to the link 2. an aversion to "too many HTTP requests" and performance implications thereof 3. worrying about a so-called explosion in URI space. All of these as if important resources don't need to be identified, and as if linking wasn't singularly cheap, as if you couldn't model new resources for batch operations, or indeed as if the web wasn't fairly optimized for caching. That was much of the point of HTTP 1.1, right? But I digress... On to intermediaries and all those caches. Reading Mark Nottingham's recent take on the state of proxy caching [2], I'm now wondering if intermediaries are going to be a limiting factor on experimentation on the web, much like NATs have been inhibiting the end-to-end internet. A couple of things immediately struck me: 1. intermediaries handle the major HTTP verbs well (GET/POST/PUT/DELETE are well understood), even as people on this list continue to eternally debate their semantics. 
However he also points out that not all of the caching intermediaries are taking full advantage of the idempotency of PUT and DELETE for further optimizations. Therein could lie a competitive advantage for those with an itch to scratch. 2. newer verbs such as those introduced by WebDAV are poorly supported, if at all. Which leads me to extensibility... Traditionally the HTTP/REST take on extensibility has been 1. new verbs (as WebDAV added to HTTP) 2. additional HTTP headers (I see lots of X-* custom headers in many applications; Google's custom cache control headers are a case in point) 3. code-on-demand 4. URIs - minting new uris (which can probably be coalesced with the next point) 5. hypermedia as the engine of application state That last is the crucial one in defining the line on extensibility, namely REST seems to place the onus on the evolving set of hypermedia standards that are exchanged. Have we come to a point where extensibility in REST is now de-facto limited to points 2 to 5? And has deployment experience on the web now limited the degrees of freedom available to designers? I guess the question boils down to this: what about the verbs? We tend to preach about uniform interfaces, things that have globally understood semantics, and indeed, we harp on the small set of verbs as the selling point of REST. "Manipulation of resources through representations" using a few methods. But is the Rule of Four the best we can hope for? Is it that the social problem of standardizing exchanged hypermedia is a more tractable one than dealing with an expanded vocabulary? "Think twice. We don't need more verbs." As I project onto the wonderful world of Waka, that next generation panacea [3], I wonder about how these deployment concerns will be addressed. Has the web now come, like the internet it was built on, to face the analogous challenge of the NAT/firewall hobgoblin? Will any new service have to pay fealty to our caching masters? 
I'd like to tie this in to the discussion around Microsoft's WEB3S [4], its resource model, and the proposal of a new HTTP verb: UPDATE. The thing that ultimately resonates is not the catchy headlines that appear as bait to some [4], nor indeed is it the discussion about the resource model, schemas, hierarchies and the like [5]. I would suggest that the main sticking point, and perhaps the sharpest criticism, is the introduction of a new verb. Now I like verbs myself, and while I wouldn't have modeled things the way Yaron and company have, I don't particularly see what's wrong with a new verb. That shouldn't be the onerous burden. In practice, however, it looks, per Mark's findings, like that will be the limiting factor for adoption of their new RESTful protocol. Might this be another argument to shy away from new HTTP methods and instead seek to model resources with the well known and understood semantics of our four horsemen of the web? Perhaps we'll find that the collections and entries of Atompub will be good enough for a large subset. I hope, and I wonder at the same time. The success of the web was predicated on that beautiful thing the uri, and on the browser, the servers, the ecosystem of HTTP libraries, and those unsung heroes: the caches. Architecting for middlemen is part of the genius of REST. The Caesar's Tax Collector Principle is one I subscribe to and I like taxes as much as the next guy, but have we reached a stage where the taxman is getting more than his due? Caches of various sorts are the main intermediaries in the wild, but there should be room for lots of innovation in this space. The pipe and filter model should allow more types of filters. What is it in the HTTP spec that makes writing intermediaries so difficult that in practice they can't handle extension methods? Is it the backwards compatibility with HTTP 1.0? Do we need errata or clarifications written for HTTP 1.1? Do we need to round up the proxies? 
Or is it simply inertia, that no one ever upgrades their caches once in place until the hardware dies? All that is certain in life is death and caches? Why can't I have more verbs? Is it that N-squared business all the way down? [7] Anyway, just a few idle thoughts. I can think of a few follow-ups and maybe I'll write them in due course: - On resource modeling [8] - REST and the Holy Grail of Extensibility - The spy thriller: The cache who loved me. or alternatively - The horror flick: Fear of the cache [1] http://koranteng.blogspot.com/2007/04/crawl-before-you-walk.html [2] http://www.mnot.net/blog/2007/06/20/proxy_caching [3] http://gbiv.com/protocols/waka/200211_fielding_apachecon.ppt [4] http://dev.live.com/livedata/web3s.htm [5] http://www.25hoursaday.com/weblog/CommentView.aspx?guid=83d2bb00-4ad6-4af2-8c2c-d4686c446737 [6] http://www.goland.org/appanddare/ [7] http://www.dehora.net/journal/2006/03/now_they_have_nsquared_problems.html [8] Alan Dean's comment about the fundamental difficulty of resource modeling. http://tech.groups.yahoo.com/group/rest-discuss/message/8673 It is a social process and there is a bit of black magic if you aren't used to doing it. I'll suggest that discussing the details of your resource model with as many people as possible is the first step. And like everything, practice makes perfect. [9] On resource modeling: Joe Gregorio always does the View Source business; also Sam and Leonard's book has 2 nice chapters on resource modeling although it wasn't called out using that term. That's my first critique of the book. The other would be the read-only versus read-write distinction. It's all resource modeling to me... Cheers, -- Koranteng Ofosu-Amaah -- Koranteng's Toli http://koranteng.blogspot.com/ -- Observers are worried
On 10/06/07, Jan Algermissen <algermissen1971@...> wrote: > in HTTP, is there a way to be sure that a representation received > upon a GET is definitely NOT coming from any (possibly > malfunctioning) cache, but really from the origin server? > > The background for the question is the Reliable-POST issue and it has > been raised that, when the server supplies unique IDs for the client > to include in its POST requests, malfunctioning caches would make it > possible for two clients to receive the same ID. > > A way to be absolutely sure that the GET response comes from the > origin server would solve that problem. Wouldn't the Date header do this job? I tested with my university's proxy (squid?) and if it's a cache hit, the Date is "old". Furthermore, if two consecutive GET requests return the same Date value, that means at least the second one is cached. GET is safe/idempotent, so if you're not sure whether the Date is old, do another GET and see if it is the same; if not, you can record the clock skew at the client. Cheers, -- Laurian Gridinoc, purl.org/net/laur
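Laurian's heuristic can be sketched as a comparison of the Date headers from two consecutive GETs: identical stamps suggest at least the second response came from a cache, since an origin server stamps each response freshly. This is only a heuristic; a coarse server clock, or a cache that rewrites Date, can fool it either way.

```python
from email.utils import parsedate_to_datetime

def second_looks_cached(date_header_1, date_header_2):
    # Parse the RFC 1123 Date values; an exact match across two
    # consecutive responses strongly suggests a cache hit.
    return parsedate_to_datetime(date_header_1) == parsedate_to_datetime(date_header_2)
```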
[ Attachment content not displayed ]
<administrivia> I apologize for the duplicate messages, not the fault of Koranteng. These were caught in the spam filter on Yahoo & I just forwarded all of them without reading them closely enough. Mike (part-time moderator) </administrivia> > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of Koranteng > Ofosu-Amaah > Sent: Saturday, June 23, 2007 5:40 AM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] On resource modeling, intermediaries > and deployment > > Lots of recent discussions have sent me down a path thinking > again about the REST take on extensibility and also about how > new services and protocols are deployed on the web. > > I started by observing lots of talk about deployment, which > puzzled me a little. A priori, I would think that deployment > concerns shouldn't drive the way you do your resource > modeling when you're designing. The advice I gave the JSF > folks was [1]: > > Seek ye first the resource model and its righteousness; and > all these things shall be added unto you. > > I truly believe that. Spending time upfront developing a good > resource model and discussing it with as many people as > possible will save your bacon down the line. Good things > follow from a resource model that is aligned with the > underlying architecture. > > I continue to see far too many people starting with > performance or premature optimizations in mind based on such things as > > 1. an aversion to the link > 2. an aversion to "too many HTTP requests" and performance > implications thereof 3. worrying about a so-called explosion > in URI space. > > All of these as if important resources don't need to be > identified, and as if linking wasn't singularly cheap, as if > you couldn't model new resources for batch operations, or > indeed as if the web wasn't fairly optimized for caching. > That was much of the point of HTTP 1.1 right? > > But I digress... 
On to intermediaries and all those caches. > > Reading Mark Nottingham's recent take on the state of proxy > caching [2], I'm now wondering if intermediaries are going to > be a limiting factor on experimentation on the web, much like > NATs have been inhibiting the end-to-end internet. > > A couple of things that immediately struck me > > 1. intermediaries handle the major HTTP verbs well > (GET/POST/PUT/DELETE are well understood), even as people on > this list continue to eternally debate their semantics. > However he also points out that not all of the caching > intermediaries are taking full advantage of the idempotency > of PUT and DELETE for further optimizations. Therein could > lie a competitive advantage for those with an itch to scratch. > > 2. newer verbs such as those introduced by WebDAV are poorly > supported, if at all. > > Which leads me to extensibility... > > Traditionally the HTTP/REST take on extensibility has been > > 1. new verbs (as WebDAV added to HTTP) > 2. additional HTTP headers (I see lots of X-* custom headers in > many applications, Google's custom cache control headers are a case in > point) > 3. code-on-demand > 4. URIs - minting new uris (which can probably be coalesced > with the next point) > 5. hypermedia as the engine of application state > > That last is the crucial one in defining the line on > extensibility, namely REST seems to place the onus on the > evolving set of hypermedia standards that are exchanged. > > Have we come to a point where extensibility in REST is now > de-facto limited to points 2 to 5? And has deployment > experience on the web now limited the degrees of freedom > available to designers? > > I guess the question boils down to this: what about the verbs? > > We tend to preach about uniform interfaces, things that have > globally understood semantics, and indeed, we harp on the > small set of verbs as the selling point of REST. > "Manipulation of resources through representations" using a > few methods. 
But is the Rule of Four the best we can hope > for? Is it that the social problem of standardizing exchanged > hypermedia is a more tractable one than dealing with an > expanded vocabulary? > > "Think twice. We don't need more verbs." > > As I project onto the wonderful world of Waka, that next > generation panacea [3], I wonder about how these deployment > concerns will be addressed. Has the web now come, like the > internet it was built on, to have to face the analogous > challenge of the NAT/firewall hobgoblin? > Will any new service have to pay fealty to our caching masters? > > I'd like to tie this in to the discussion around Microsoft's > WEB3S [4], its resource model, and the proposal of a new HTTP > verb: UPDATE. > > The thing that ultimately resonates is not the catchy > headlines that appear to bait to some [4], nor indeed is it > the discussion about the resource model, schemas, hierarchies > and the like [5]. I would suggest that the main sticking > point, and perhaps the sharpest criticism, is the > introduction of a new verb. > > Now I like verbs myself, and while I wouldn't have modeled > things the way Yaron and company have, I don't particularly > see what's wrong with a new verb. That shouldn't be the > onerous burden. In practice, however, it looks, per Mark's > findings, like that will be the limiting factor for adoption > of their new RESTful protocol. > > Might this be another argument to shy away from new HTTP > methods and instead seek to model resources with the well > known and understood semantics of our four horsemen of the > web? Perhaps we'll find that the collections and entries of > Atompub will be good enough for a large subset. I hope, and I > wonder at the same time. > > The success of the web was predicated on that beautiful thing > the uri, and on the browser, the servers, the ecosystem of > HTTP libraries, and those unsung heroes: the caches. > Architecting for middlemen is part of the genius of REST. 
The > Caesar's Tax Collector Principle is one I subscribe to and I > like taxes as much as the next guy, but have we reached a > stage where the taxman is getting more than his due? > > Caches of various sorts are the main intermediaries in the > wild, but there should be room for lots of innovation in this > space. The pipe and filter model should allow more types of > filters. What is it in the HTTP spec that makes writing > intermediaries so difficult that in practice they can't > handle extension methods? Is it the backwards compatibility > with HTTP 1.0? Do we need errata or clarifications written > for HTTP 1.1? Do we need to round up the proxies? Or is it > simply inertia, that no one ever upgrades their caches once > in place until the hardware dies? All that is certain in life > is death and caches? > > Why can't I have more verbs? Is it that N-squared business > all the way down? [7] > > Anyway just a few idle thoughts. > > I can think of a few follow-ups and maybe I'll write them in > due course: > > - On resource modeling [8] > - REST and the Holy Grail of Extensibility > - The spy thriller: The cache who loved me. or alternatively > - The horror flick: Fear of the cache > > [1] http://koranteng.blogspot.com/2007/04/crawl-before-you-walk.html > > [2] http://www.mnot.net/blog/2007/06/20/proxy_caching > > [3] http://gbiv.com/protocols/waka/200211_fielding_apachecon.ppt > > [4] http://dev.live.com/livedata/web3s.htm > > [5] > http://www.25hoursaday.com/weblog/CommentView.aspx?guid=83d2bb > 00-4ad6-4af2-8c2c-d4686c446737 > > [6] http://www.goland.org/appanddare/ > > [7] > http://www.dehora.net/journal/2006/03/now_they_have_nsquared_p > roblems.html > > [8] Alan Dean's comment about the fundamental difficulty of > resource modeling. > http://tech.groups.yahoo.com/group/rest-discuss/message/8673 > > It is a social process and there is a bit of black magic if > you aren't used to doing it. 
I'll suggest that discussing > with as many people the details of your resource model is the > first step. And like everything practice makes perfect > > [9] On resource modeling: Joe Gregorio always does the View > Source business; also Sam and Leonard's book has 2 nice > chapters on resource modeling although it wasn't called out > using that term. That's my first critique of the book. The > other would be the read-only versus read-write distinction. > It's all resource modeling to me... > > Cheers, > -- > Koranteng Ofosu-Amaah > -- > Koranteng's Toli > http://koranteng.blogspot.com/ > -- > Observers are worried > > > > Yahoo! Groups Links > > >
There are several concurrent threads discussing similar points about HTTP. To pick one example, "Where does RFC 2616 say POST MUST be non-idempotent?" I say it matters not, because HTTP is not REST. An HTTP application is implemented RESTfully by constraining messages to be self-descriptive, etc. In one application, POST may be constrained to be idempotent and in another, POST may be constrained to be non-idempotent. The same goes for the debate over partial vs. full PUT. Either approach may be part of a RESTful system and either approach may be described by RFC 2616, but this does not make the protocol vague because the append/annotate semantics of POST are specific. If I constrain POST to be non-idempotent in my REST API, that still isn't a guarantee that it won't appear to be idempotent from the user-agent perspective over some arbitrary period of time. If I constrain POST to be idempotent in my REST API, more power to me but I can't expect to get any of the benefits of scale possible with methods recognized as idempotent in the established protocols. HTTP user-agents won't treat multiple POST requests as idempotent even if the responses appear idempotent because the non-idempotent semantics intended for POST are specific. Similarly, in one REST application PUT may be constrained to be a full replacement, while in another REST application PUT may be treated as a partial replacement. RFC 2616 may be interpreted to describe either approach, but this doesn't make HTTP vague because the replacement (storage?) semantics of PUT are clear. If my REST API generates an XHTML representation with a server-assigned <title> and it constrains PUT to be a full replacement, attempts by the user-agent to edit the <title> will fail in the same way even when the PUT is repeated without resulting in a 400 response. The semantics are replacement, not merge, even though the result is a partial update because the idempotent semantics intended for PUT are specific. 
So +1 from me for reviving PATCH to provide an HTTP method which constrains messages as having merge semantics. Even if it isn't in RFC 2616, the use of an established, self-descriptive protocol method which applies a merge constraint on the communication between the components of a distributed hypermedia system is very much in accordance with REST, whereas the assignment of merge semantics to PUT (even in a non-HTTP protocol) is not. Nor is the introduction of a new method whose semantics overlap those of other methods, because in a truly RESTful API each verb used maps to a different user-action and set of appropriate response codes, i.e. each method must have its own semantics. Since Web3S is not an HTTP protocol we may evaluate it purely in terms of REST rather than in terms of RFC 2616. It is my position that when designing a new protocol which re-uses existing, similar methods that the semantics of the established methods not be changed. Which means I am not against the introduction of new methods in a new protocol per se, only -1 against the use of UPDATE in Web3S. REST application semantics must be defined by the network interface, not the media type. Web3S fails to constrain protocol methods to have different meanings, therefore its messages are not self-descriptive. Both PUT and UPDATE have merge semantics in Web3S, while no method is constrained to have replacement semantics. The established protocols are specific about the intent of application actions, PUT clearly intends to have replacement semantics and PATCH clearly intends to have merge semantics. The failure to constrain the communication between components in a Web3S interaction leads to the requirement that the components understand that the media type or schema override the established replacement semantics of PUT with the semantics of merge, and no intermediary could ever hope to figure out the semantics of UPDATE. 
Which makes Web3S appear to be a library-based, rather than a network-based, API. To paraphrase Dr. Fielding: "The result is an application that forbids any layers of transformation and indirection that are independent of the information origin, which is not so useful for an Internet-scale, multi-organization, anarchically scalable information system" because the interface is using nonstandard semantics, i.e. is not really generic. While requests may be directed at resources, it still smells RPCish to me, because Web3S doesn't seem to be much more than a transport protocol. Resource-oriented? Yes. REST? Sadly, no. While neither HTTP nor Web3S is REST, a RESTful API may be implemented using HTTP, but not with Web3S as currently written. Thesis references follow, Eric ========================================== 5.3.1 Process View REST enables intermediate processing by constraining messages to be self-descriptive: interaction is stateless between requests, standard methods and media types are used to indicate semantics and exchange information, and responses explicitly indicate cacheability. 5.4 Related Work [T]he real WWW architecture is independent of any single implementation. The modern Web is defined by its standard interfaces and protocols, not how those interfaces and protocols are implemented in a given piece of software. 6.2.2 Manipulating Shadows An origin server maintains a mapping from resource identifiers to the set of representations corresponding to each resource. A resource is therefore manipulated by transferring representations through the generic interface defined by the resource identifier... Forcing the interface definitions to match the interface requirements causes the protocols to seem vague, but that is only because the interface being manipulated is only an interface and not an implementation. 
The protocols are specific about the intent of an application action, but the mechanism behind the interface must decide how that intention affects the underlying implementation of the resource mapping to representations. 6.2.4 Binding Semantics to URI It is the nature of every engineer to define things in terms of the characteristics of the components that will be used to compose the finished product. The Web doesn't work that way. The Web architecture consists of constraints on the communication model between components, based on the role of each component during an application action. This prevents the components from assuming anything beyond the resource abstraction, thus hiding the actual mechanisms on either side of the abstract interface. 6.5.1 Advantages of a Network-based API A network-based API is an on-the-wire syntax, with defined semantics, for application interactions. A network-based API does not place any restrictions on the application code aside from the need to read/write to the network, but does place restrictions on the set of semantics that can be effectively communicated across the interface. On the plus side, performance is only bounded by the protocol design and not by any particular implementation of that design. A library-based API does a lot more for the programmer, but in doing so creates a great deal more complexity and baggage than is needed by any one system, is less portable in a heterogeneous network, and always results in genericity being preferred over performance. As a side-effect, it also leads to lazy development (blaming the API code for everything) and failure to account for non-cooperative behavior by other parties in the communication. 6.5.2 HTTP is not RPC What makes HTTP significantly different from RPC is that the requests are directed to resources using a generic interface with standard semantics that can be interpreted by intermediaries almost as well as by the machines that originate services. 
The result is an application that allows for layers of transformation and indirection that are independent of the information origin, which is very useful for an Internet-scale, multi-organization, anarchically scalable information system. 6.5.3 HTTP is not a Transport Protocol It is possible to achieve a wide range of functionality using this very simple interface, but following the interface is required in order for HTTP semantics to remain visible to intermediaries. A true application of HTTP maps the protocol user's actions to something that can be expressed using HTTP semantics, thus creating a network-based API to services which can be understood by agents and intermediaries without any knowledge of the application.
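Eric's XHTML <title> example above is worth making concrete. The following is a hypothetical in-memory sketch (the store, the function name, and the server-title policy are all invented for illustration, not anything from the thread): repeating a full-replacement PUT is idempotent even though the stored state never exactly matches what the client sent.

```python
# Hypothetical sketch: a server that treats PUT as full replacement but,
# like Eric's XHTML example, always assigns its own <title>. Repeating
# the PUT is idempotent: the state after N identical PUTs equals the
# state after one, even though the client's title edit "fails".

SERVER_TITLE = "Server-Assigned Title"  # assumed server-side policy

def put_full_replace(store, uri, representation):
    """Replace the stored representation wholesale, then apply the
    server's own constraint (overriding the client-sent title)."""
    new_state = dict(representation)   # full replacement, not a merge
    new_state["title"] = SERVER_TITLE  # server-side override
    store[uri] = new_state
    return store[uri]

store = {}
first = put_full_replace(store, "/entry/1", {"title": "My Title", "body": "hello"})
second = put_full_replace(store, "/entry/1", {"title": "My Title", "body": "hello"})
assert first == second                             # idempotent repetition
assert store["/entry/1"]["title"] == SERVER_TITLE  # client title edit fails the same way
```

The point of the sketch is that the replacement *semantics* stay unambiguous even though the server's resulting state is its own business.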
On 6/24/07, Nick Gall <nick.gall@...> wrote: > On 6/23/07, Alan Dean <alan.dean@...> wrote: > > Forgive me for nit-picking, but I think that Roy was saying that the > > HTTP spec doesn't need to be prescriptive, but the application > > protocol layered on top *does*: > > > > "It is only when we talk about specific applications of AtomPP, such > > as an authoring interface to a corporate blog, that we can say > > anything about the anticipated state change on the server." > > > > Is that what you meant? :-) > > > > Absolutely! /me nods > But the particular application protocol derived from the HTTP > protocol can only have the freedom to constrain in a particular prescriptive > way if the semantics of HTTP are open ended enough to allow such freedom. Right - HTTP defines the envelope of permitted behaviours. > Mandating that PUT entail ONLY replacement semantics severely constrains the > "prescriptive freedom" of derivative application protocols. I have to say that it has taken a long trip round the houses, but I feel that I do understand your point now ;-) It seems to me that the corollary of your position is that REST *in practice* requires an application protocol in order for the semantics to be fully known between UA and server. In particular, you would say that this applies to POST (which everyone regards as ambiguous) and to PUT (which most people currently regard as unambiguous). Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
On 6/11/07, Laurian Gridinoc <laurian@...> wrote: > > On 10/06/07, Jan Algermissen <algermissen1971@...> wrote: > > in HTTP, is there a way to be sure that a representation received > > upon a GET is definitely NOT coming from any (possibly > > malfunctioning) cache, but really from the origin server? > > > > The background for the question is the Reliable-POST issue and it has > > been raised that, when the server supplies unique IDs for the client > > to include in its POST requests, malfunctioning caches would make it > > possible for two clients to receive the same ID. > > > > A way to be absolutely sure that the GET response comes from the > > origin server would solve that problem. > > Wouldn't the Date header do this job? > I tested with my university's proxy (squid?) and if it's a cache hit, > the Date is "old". ... unless the server clock is wrong.
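Laurian's Date-header test against the university proxy can be sketched as a small heuristic. Everything here (the function, the header set, the five-second tolerance) is an assumption invented for illustration: a nonzero Age header, or a Date noticeably in the past, suggests a cache answered rather than the origin server. As the follow-up notes, it breaks if the origin's clock is wrong.

```python
# Hypothetical heuristic: decide whether a response likely came from a
# cache. Age > 0 is an explicit cache signal (RFC 2616 14.6); an "old"
# Date header is Laurian's squid observation. Caveat: trusts the
# origin server's clock.

from email.utils import parsedate_to_datetime
from datetime import datetime, timezone, timedelta

def looks_cached(headers, now=None, tolerance_seconds=5):
    now = now or datetime.now(timezone.utc)
    if int(headers.get("Age", "0")) > 0:   # explicit cache age
        return True
    date = parsedate_to_datetime(headers["Date"])
    return (now - date) > timedelta(seconds=tolerance_seconds)

now = datetime(2007, 6, 11, 12, 0, 0, tzinfo=timezone.utc)
fresh = {"Date": "Mon, 11 Jun 2007 12:00:00 GMT"}
stale = {"Date": "Mon, 11 Jun 2007 11:00:00 GMT"}
assert not looks_cached(fresh, now=now)
assert looks_cached(stale, now=now)
```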
Alan Dean wrote: > PUT /robots.txt > > ... replaces the whole file > > I hope that the above is a trivial enough example to be uncontentious. > I would think that everyone can agree that the semantics are not > ambiguous for robots.txt I wish that were true. It is not. Some people believe that it is within the bounds of the spec for PUT /robots.txt to merge the new content with the old content. I do not so believe, but the APP working group (or at least an apparent majority of their members) do. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 6/24/07, Elliotte Harold <elharo@...> wrote: > Alan Dean wrote: > > > PUT /robots.txt > > > > ... replaces the whole file > > > > I hope that the above is a trivial enough example to be uncontentious. > > I would think that everyone can agree that the semantics are not > > ambiguous for robots.txt > > I wish that were true. It is not. > > Some people believe that it is within the bounds of the spec for PUT > /robots.txt to merge the new content with the old content. > > I do not so believe, but the APP working group (or at least an apparent > majority of their members) do. /me nods From the subthread discussion with Nick, I think that I understand now the basis used to assert that HTTP permits this interpretation of PUT. What I don't understand is why it might be considered a good idea to make PUT ambiguous in that way. Where's the advantage, I ask myself? Why go to all that trouble when you already have an ambiguous method, POST, sitting there ready to do your bidding as you see fit... Who knows - perhaps I simply lack the necessary enlightenment, but I just don't see it. Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
Elliotte Harold wrote: > John Panzer wrote: > > >> You'd PUT the whole representation, replacing the entire entry. This >> always works. It also works for removing the summary. >> > > > I agree. However some people here do not agree with this, and have > maintained that the server is free to do anything it likes, including > retaining the old categories, adding any new ones, and still returning a > 200. > I think that the majority of the AtomPub working group simply couldn't figure out how to disallow "bad" server behavior in the spec without also outlawing "good" server behavior. For background, the AtomPub spec already had opened the barn door to servers modifying the resource and returning 200. Trying to legislate hard limits on this behavior leads to silliness, even though you don't want servers to abuse it. For example, one could say that a server's resource update on a PUT is allowed to depend on the current server time, authorized user, phase of the moon, and basically the entire state of the universe, except for the prior state of the target resource. Nobody could figure out how this restriction would be helpful, though. > At this point, I simply hope that no one is unwise enough to implement > that behavior, even if the spec allows it (or, more properly, does not > explicitly forbid it). > > The WG couldn't think of anything better than to leave this to the ineffable forces of the free market. -John
On 6/23/07, John Panzer <jpanzer@...> wrote: > > Elliotte Harold wrote: > > I agree. However some people here do not agree with this, and have > > maintained that the server is free to do anything it likes, including > > retaining the old categories, adding any new ones, and still returning a > > 200. > > I think that the majority of the AtomPub working group simply couldn't figure out how to > disallow "bad" server behavior in the spec without also outlawing "good" server behavior. The HTTP spec makes exactly this point. That's why it doesn't define how a PUT request affects the state of the server. I find it very puzzling that Julian is the only other person in this thread that seems to understand that the semantics of PUT are unambiguous, while the requirements on the servers are completely undefined. 1.) the semantics of PUT are unambiguous 2.) requirements on servers receiving PUT requests are undefined Both are true, and #2 does not change #1. Understanding that these two facts can be simultaneously true is key to understanding HTTP. It's not a problem, and it's not underspecified. The actual problem we're encountering is that people think they need PATCH. Maybe they do. Or maybe they need POST and ad-hoc delta formats... hey, that's how forms work. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
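Sayre's closing point, that HTML forms already are POST plus an ad-hoc delta format, can be sketched like this. The handler name and merge policy are hypothetical, invented only to illustrate the idea: the server merges just the submitted fields into the resource state.

```python
# Hypothetical sketch of "POST and ad-hoc delta formats... that's how
# forms work": a form-encoded body is a partial update, and the server
# decides how to fold it into the resource.

from urllib.parse import parse_qsl

def apply_form_delta(resource, form_body):
    """Merge a form-encoded delta into a resource, as a form handler would."""
    delta = dict(parse_qsl(form_body))
    updated = dict(resource)
    updated.update(delta)  # only the submitted fields change
    return updated

entry = {"title": "Old", "summary": "unchanged", "author": "alice"}
result = apply_form_delta(entry, "title=New")
assert result == {"title": "New", "summary": "unchanged", "author": "alice"}
```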
Alan Dean wrote: > ... > > PUT /robots.txt > > ... replaces the whole file > How about PUT /robots.txt Content-Range: bytes=50-80/500 ... ? [http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html]
On Jun 23, 2007, at 2:48 AM, Mike Dierken wrote: > A full update via PUT has the possibility of being cached without > requiring the new version of the resource to be sent in the > response, whereas a partial update via PUT does not allow this > possibility. Everybody seems to consider the logging of a GET request as something that a server can do without violating the "SAFE" constraint. It seems to me that the right way to handle PUT is similar - a PUT should be a "logical" replace, but I can't see anything wrong with the server adding a link to the representation, for example to point to some resource updated as a side-effect. Caching the representation that is being PUT would make this impossible. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On 6/24/07, John Panzer <jpanzer@...> wrote: > Alan Dean wrote: > > ... > > > > PUT /robots.txt > > > > ... replaces the whole file > > > How about > > PUT /robots.txt > Content-Range: bytes=50-80/500 > ... > > ? > > [http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html] ... and your point is? At no point did I say that the PUT was a byte-for-byte representation. It could be a representation rendered in any number of MIME types, so I don't see how Content-Ranges is applicable to the problem domain we are discussing. My use of robots.txt in my example had nothing to do with it being a textfile, and everything to do with the fact it is a well-known non-hierarchical representation (as the point I was making pertained to hierarchy). Am I to infer that you think a RESTful protocol should specify a PUT where you 'swap out' a byte range based upon Content-Range? Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
John Panzer wrote: > > > Alan Dean wrote: > > ... > > > > PUT /robots.txt > > > > ... replaces the whole file > > > How about > > PUT /robots.txt > Content-Range: bytes=50-80/500 > ... > > ? > ... This has been discussed often enough on the WebDAV mailing list. The main problem is that it's hard to deploy, because many deployed servers ignore "Content-Range" upon PUT, so the request would damage the content. The solution to this, again, is PATCH with a patch format that allows these kinds of modifications. Best regards, Julian
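A minimal sketch of what Julian suggests, using an invented byte-range patch format (PATCH itself prescribes no particular format; this one is purely illustrative): the patch names the byte range to replace, and a server that doesn't understand the format simply rejects the request instead of silently corrupting the resource.

```python
# Hypothetical byte-range patch format for PATCH: replace bytes
# [start, end) of the stored content with the supplied replacement.
# A server that can't parse the format would reject the PATCH outright,
# unlike a server silently ignoring Content-Range on PUT.

def apply_range_patch(content: bytes, start: int, end: int, replacement: bytes) -> bytes:
    if not (0 <= start <= end <= len(content)):
        raise ValueError("range outside resource")  # would map to an error response
    return content[:start] + replacement + content[end:]

doc = b"User-agent: *\nDisallow: /private\n"
patched = apply_range_patch(doc, 24, 32, b"/secret")
assert patched == b"User-agent: *\nDisallow: /secret\n"
```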
Koranteng Ofosu-Amaah wrote: > ... > 1. intermediaries handle the major HTTP verbs well > (GET/POST/PUT/DELETE are well understood), even as on this list people > continue to eternally debate their semantics. However he also points > out that not all of the caching intermediaries are taking full > advantage of the idempotency of PUT and DELETE for further > optimizations. A competitive advantage for those with an itch to > scratch > ... I think cache optimizations by PUT and DELETE are overrated. A cache can (and SHOULD) note that the things identified by the Request-URI are stale, but that's really it. Furthermore, the optimization in general will only affect the single cache the request went through. > 2. newer verbs such as those introduced by WebDAV are poorly > supported, if at all. Mark wrote: "GET, HEAD, POST, PUT, DELETE, OPTIONS, and TRACE all seemed to work OK, but quite a few caches had problems with extension HTTP methods. If you're using non-standard HTTP methods (or even some of the more esoteric WebDAV methods; there are a lot of them), beware." So I'm not sure what "poorly" supported means here. In general, a cache only needs to understand the message transmission rules of HTTP to support *any* method. It would be nice to know what the problems Mark saw were, though. > Which leads me to extensibility... > > Traditionally the HTTP/REST take on extensibility has been > > 1. new verbs (as WebDAV added to HTTP) > 2. additional HTTP headers (I see lots of X-* custom headers in many > applications, Google's custom cache control headers are a case in > point) > 3. code-on-demand > 4. URIs - minting new uris (which can probably be coalesced with the > next point) > 5. hypermedia as the engine of application state > > That last is the crucial one in defining the line on extensibility, > namely REST seems to place the onus on the evolving set of hypermedia > standards that are exchanged. 
> > Have we come to a point where extensibility in REST is now de-facto > limited to points 2 to 5? And has deployment experience on the web now > limited the degrees of freedom available to designers? My experience with developing (and supporting) the HTTP server and client stack in one of SAP's portal products for many years says that new methods do not cause major problems. That may be influenced by the fact that most *authoring* goes over HTTPS, and thus caches won't be able to do any harm. > ... > I'd like to tie this in to the discussion around Microsoft's WEB3S > [4], its resource model and the proposal of a new HTTP verb: UPDATE. > > The thing that ultimately resonates is not the catchy headlines that > appear to bait to some [4], nor indeed is it the discussion about the > resource model, schemas, hierarchies and the like [5]. I would suggest > that the main sticking point and perhaps the sharpest criticism is the > introduction of a new verb. > > Now I like verbs myself, and while I wouldn't have modeled things the > way Yaron and company did, I don't particularly see what's wrong > with a new verb. That shouldn't be the onerous burden. In practice, > however, it looks, per Mark, like that will be the limiting factor for > adoption of their new RESTful protocol. > ... I don't think this is true. For instance, judging from the market share, a big majority of browsers supports arbitrary HTTP methods in XmlHttpRequest (Firefox, IE6 + ActiveX-XHR, IE7 + ActiveX-XHR). As far as I can tell, only Safari, Opera and the new native XHR support in IE7 have problems, and I have made sure that Yaron is aware of that :-). > Might this be another argument to shy away from new HTTP methods and > instead seek to model resources with the well known and understood > semantics of our four horsemen of the web? Perhaps we'll find that the > collections and entries of Atompub will be good enough for a large > subset. I hope, and I wonder at the same time. 
I sympathize with those who dislike "arbitrary" new methods. In general I would argue that a new method should be usable in a wide range of scenarios. For instance: MKCOL, COPY, MOVE (RFC2518): if a server implements a hierarchical namespace and wants to enable a client to manipulate it, this seems to be sufficiently generic, well understood and simple to implement. VERSION-CONTROL, CHECKIN, CHECKOUT (RFC3253): similarly for in-place version control. ...however...: MKCALENDAR (RFC4791): here's a verb used for a single use case; creating a collection with a specific constraint. Adding a new method for each of these will cause the introduction of many new methods that essentially do the same thing, and only differ in the name. Proof: MKADDRESSBOOK (<http://tools.ietf.org/html/draft-daboo-carddav-02#section-6.3.1>). The right thing here would have been an extension to MKCOL, supporting all these special cases. > ... Best regards, Julian
: You can't do: PUT http://www.marcdegraauw.com/friend/markbaker, : because RFC2616 says: "the URI in a PUT request identifies the entity : enclosed with the request" and http://www.marcdegraauw.com/friend/markbaker : does not identify the comment, but the page-to-be-created. In concrete terms, "you can't do" means what? He can't put those bits on the wire? He can't mean "replace that with this"? I say he can do both. :You can do: POST : http://www.marcdegraauw.com/friend/ with 'markbaker' and the comment in the : body. But if you POST once, twice or N times, my server will end up in : exactly the same state: once a friend, always a (=1) friend. Sounds pretty : idempotent to me. If you implement it that way, fine, but there is still no promise of idempotence coming from HTTP POST. And I think what you and others on this list are still missing is that there is no "non-idempotent" constraint anywhere. It would be lunacy. Walden
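Marc's "once a friend, always a friend" server can be sketched in a few lines (the handler is hypothetical, invented for illustration). The effect happens to be idempotent at the server, but, as Walden argues, HTTP makes no idempotence promise for POST, so generic intermediaries and user-agents cannot rely on it and will not retry the request on their own.

```python
# Hypothetical sketch of an accidentally-idempotent POST handler:
# adding the same friend N times leaves the server in the same state,
# yet nothing in HTTP lets a client or cache assume this.

friends = set()

def post_friend(name):
    friends.add(name)      # adding twice leaves the same state
    return sorted(friends)

assert post_friend("markbaker") == ["markbaker"]
assert post_friend("markbaker") == ["markbaker"]  # N requests, same state
```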
Julian Reschke wrote: > John Panzer wrote: > >> >>Alan Dean wrote: >> > ... >> > >> > PUT /robots.txt >> > >> > ... replaces the whole file >> > >>How about >> >>PUT /robots.txt >>Content-Range: bytes 50-80/500 >>... >> >>? > > > ... > > This has been discussed often enough on the WebDAV mailing list. > > The main problem is that it's hard to deploy, because many deployed > servers ignore "Content-Range" upon PUT, so the request would damage the > content. I had thought that Accept-Ranges: bytes would address this, but upon closer reading realized that it's advertising support for GET ranges only. So assume for the moment that a server needs to advertise its support for this extension in some way (as it would for PATCH + some specific delta format); say, Accept-Put-Ranges:. Are there other problems? -John
On 24.06.2007, at 15:41, Walden Mathews wrote: > > And I think what you and others on this list are still missing is > that there > is > no "non-idempotent" constraint anywhere. It would be lunacy. Why not see it this way (and probably that is what you are saying, not sure): Communication between independent processes is essentially concerned with the coordination of these processes. From this POV, every single request is a single distinct act of coordination and therefore of distinct significance. That is the default - if it was not, the communication would be useless, it would never achieve any form of coordination. In addition, some particular kinds of requests are explicitly marked as being idempotent. Any request kind that is not marked as idempotent must inherently be non-idempotent - from the point of view of achieving coordination between processes. Turn this around to: "If you want to place the same order twice, PUT just won't do it". Jan > Walden
Jan, I don't think that touches on what I'm saying. What is "non-idempotent" to you? Give an example, and then give a counterexample. Walden ----- Original Message ----- From: "Jan Algermissen" <algermissen1971@...> To: "Walden Mathews" <waldenm@...> Cc: "Marc de Graauw" <marc@...>; "'Rest List'" <rest-discuss@yahoogroups.com> Sent: Sunday, June 24, 2007 10:11 AM Subject: Re: [rest-discuss] Must POST be non-idempotent?
Alan Dean wrote: > On 6/24/07, John Panzer <jpanzer@...> wrote: > >>Alan Dean wrote: >> >>>... >>> >>>PUT /robots.txt >>> >>>... replaces the whole file >>> >> >>How about >> >>PUT /robots.txt >>Content-Range: bytes 50-80/500 >>... >> >>? >> >>[http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html] > > > ... and your point is? It's a question, not a point. I'm honestly trying to seek viewpoints. It seems to me that Content-Range: on PUT, if supported, means that the client is no longer replacing the entire resource, only part of it. So the new state depends on the old state and the state transmitted by the PUT. Which is what the anti-partial-PUTtians disagree with strongly, so it's interesting to note that the base HTTP spec at least appears to specify a mechanism for doing it. I don't know whether a RESTful protocol should leverage this or not. I'm first trying to determine whether there is some fundamental problem with the concept. If there is no fundamental problem, it could then be compared against alternatives such as POST and PATCH. (And UPDATE, though I have a fundamental problem with UPDATE if POST and PATCH are actually both in the running: Don't multiply methods without necessity.) -John PS: I will note that Content-Range: lets you treat a non-hierarchical resource pseudo-hierarchically, for what it's worth. Not sure that goes anywhere.
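Concretely, a server that chose to honour Content-Range on PUT would splice the enclosed bytes into the stored entity instead of replacing it. A purely illustrative sketch, under the assumption that the server has opted into this behaviour (nothing in HTTP requires it):

```python
import re
from typing import Optional

def apply_put(stored: bytes, body: bytes,
              content_range: Optional[str]) -> bytes:
    """Apply a PUT to a stored entity. Without Content-Range the entity
    is replaced wholesale; with "bytes first-last/total" only that
    contiguous span is overwritten -- the partial-PUT reading debated
    here."""
    if content_range is None:
        return body  # classic PUT: full replacement
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(?:\d+|\*)", content_range)
    if m is None:
        raise ValueError("unsupported Content-Range")
    first, last = int(m.group(1)), int(m.group(2))
    if last - first + 1 != len(body):
        raise ValueError("range length does not match body length")
    return stored[:first] + body + stored[last + 1:]
```

A deployed server that ignores the header and blindly stores the body would instead truncate the resource to the partial content, which is exactly the damage Julian warns about.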
On 24.06.2007, at 17:25, Walden Mathews wrote: > Jan, > > I don't think that touches on what I'm saying. ok, sorry. > > What is "non-idempotent" to you? Give an example, and > then give a counterexample. Non-idempotent: Any method about which the method definition does not say: "You may call this method as often as you like - the server will not regard the number of invocations[1] as significant to the coordination that is achieved" Idempotent: Any method about which the method definition *does* say: "You may call this method as often as you like - the server will not regard the number of invocations[1] as significant to the coordination that is achieved" [1] of course we are talking about requests with identical payload here Jan > > Walden
Alan Dean wrote: > On 6/24/07, Elliotte Harold <elharo@...> wrote: > >>Alan Dean wrote: >> >> >>>PUT /robots.txt >>> >>>... replaces the whole file >>> >>>I hope that the above is a trivial enough example to be uncontentious. >>>I would think that everyone can agree that the semantics are not >>>ambiguous for robots.txt >> >>I wish that were true. It is not. >> >>Some people believe that it is within the bounds of the spec for PUT >>/robots.txt to merge the new content with the old content. >> >>I do not so believe, but the APP working group (or at least an apparent >>majority of their members) do. > > > /me nods > >>From the subthread discussion with Nick, I think that I understand now > the basis used to assert that HTTP permits this interpretation of PUT. > > What I don't understand is why it might be considered a good idea to > apply make PUT ambiguous in that way. Where's the advantage, I ask > myself? Why go to all that trouble when you already have an ambiguous > method, POST, sitting there ready to do your bidding as you see fit... One reason is that you're already using POST to mean "add a sub-resource"; also using it to mean "apply a delta" means that you need to switch on the MIME type to determine the basic semantics. And also you can't store a delta document as a sub-resource :). Neither of which are killers of course, but they seem a bit messy. -John
On 6/24/07, Nick Gall <nick.gall@...> wrote: > On 6/24/07, Robert Sayre <sayrer@...> wrote: > > On 6/23/07, John Panzer <jpanzer@...> wrote: > > > > > > Elliotte Harold wrote: > > > > I agree. However some people here do not agree with this, and have > > > > maintained that the server is free to do anything it likes, including > > > > retaining the old categories, adding any new ones, and still returning > a > > > > 200. > > > > > > I think that the majority of the AtomPub working group simply couldn't > figure out how to > > > disallow "bad" server behavior in the spec without also outlawing "good" > server behavior. > > > > The HTTP spec makes exactly this point. That's why it doesn't define > > how a PUT request affects the state of the server. > > > > I find it very puzzling that Julian is the only other person in this > > thread that seems to understand that the semantics of PUT are > > unambiguous, while the requirements on the servers are completely > > undefined. > > > > 1.) the semantics of PUT are unambiguous > > 2.) requirements on servers receiving PUT requests are undefined > > > > Both are true, and #2 does not change #1. Understanding that these two > > facts can be simultaneously true is key to understanding HTTP. It's > > not a problem, and it's not underspecified. > > > > I agree that (1) and (2) are true and (2) does not change (1). But way back > in the thread, you appeared to want to add a third constraint > > 3.) "[O]missions in a client PUT message [mean] unset those portions"; > omission does "not mean only update the included elements." > > (1), (2), and (3) can NOT be all true. (3) contradicts (2) because it > defines "requirements on servers receiving PUT requests". > > The ambiguity I've been referring to all along is the ambiguity between (2) > and (3). 
Some people think PUT defines the requirement of replacement > semantics (the 3 camp) and some people think the choice between replacement > and merge are undefined (left open to the parties applying HTTP) (the 2 > camp). I thought Robert was in the (3) camp. You can't be in both. Yes, you can. That is the understanding that is missing. The messages must have unambiguous semantics in order to build working systems, and implementations must be free to react as they please in order to build maintainable systems. The completely clear replacement semantics of PUT messages place no requirements on servers, but they do create user expectations. Meeting user expectations, aka "not sucking", is not something you can put in the wire protocol. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
On 6/24/07, John Panzer <jpanzeracm@...> wrote: > Alan Dean wrote: > > On 6/24/07, Elliotte Harold <elharo@...> wrote: > > > >>Alan Dean wrote: > >> > >> > >>>PUT /robots.txt > >>> > >>>... replaces the whole file > >>> > >>>I hope that the above is a trivial enough example to be uncontentious. > >>>I would think that everyone can agree that the semantics are not > >>>ambiguous for robots.txt > >> > >>I wish that were true. It is not. > >> > >>Some people believe that it is within the bounds of the spec for PUT > >>/robots.txt to merge the new content with the old content. > >> > >>I do not so believe, but the APP working group (or at least an apparent > >>majority of their members) do. > > > > > > /me nods > > > >>From the subthread discussion with Nick, I think that I understand now > > the basis used to assert that HTTP permits this interpretation of PUT. > > > > What I don't understand is why it might be considered a good idea to > > apply make PUT ambiguous in that way. Where's the advantage, I ask > > myself? Why go to all that trouble when you already have an ambiguous > > method, POST, sitting there ready to do your bidding as you see fit... > > One reason is that you're already using POST to mean "add a > sub-resource"; also using it to mean "apply a delta" means that you need > to switch on the MIME type to determine the basic semantics. And also > you can't store a delta document as a sub-resource :). "The action performed by the POST method might not result in a resource that can be identified by a URI." http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5 The resource|representation semantics of POST are so ambiguous that I sometimes wonder if the name should have been MISC Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
Alan Dean wrote: > On 6/24/07, John Panzer <jpanzeracm@...> wrote: >> Alan Dean wrote: >> > On 6/24/07, Elliotte Harold <elharo@...> wrote: >> > >> >>Alan Dean wrote: >> >> >> >> >> >>>PUT /robots.txt >> >>> >> >>>... replaces the whole file >> >>> >> >>>I hope that the above is a trivial enough example to be >> uncontentious. >> >>>I would think that everyone can agree that the semantics are not >> >>>ambiguous for robots.txt >> >> >> >>I wish that were true. It is not. >> >> >> >>Some people believe that it is within the bounds of the spec for PUT >> >>/robots.txt to merge the new content with the old content. >> >> >> >>I do not so believe, but the APP working group (or at least an >> apparent >> >>majority of their members) do. >> > >> > >> > /me nods >> > >> >>From the subthread discussion with Nick, I think that I understand now >> > the basis used to assert that HTTP permits this interpretation of PUT. >> > >> > What I don't understand is why it might be considered a good idea to >> > apply make PUT ambiguous in that way. Where's the advantage, I ask >> > myself? Why go to all that trouble when you already have an ambiguous >> > method, POST, sitting there ready to do your bidding as you see fit... >> >> One reason is that you're already using POST to mean "add a >> sub-resource"; also using it to mean "apply a delta" means that you need >> to switch on the MIME type to determine the basic semantics. And also >> you can't store a delta document as a sub-resource :). > > "The action performed by the POST method might not result in a > resource that can be identified by a URI." > > http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5 > > The resource|representation semantics of POST are so ambiguous that I > sometimes wonder if the name should have been MISC Right. 
I was thinking specifically about the AtomPub profile of HTTP, in which POST on certain URIs has specific semantics; a POST to something defined as a "collection URI" is supposed to create a new sub-resource with its own URI. However the "collection URI" may also respond to GET with a collection of all the sub-resources in feed format. PUT isn't defined for it by AtomPub but if it were defined it would be natural to allow a PUT of an entire collection to replace them all at once (for example to ensure consistency). But then if you wanted to do a "delta update" of just a subset of the collection with one operation, POST is already spoken for. You could of course do a delta update if the MIME type is X and add a sub-resource if MIME type is Y. (AtomPub allows POSTing of arbitrary resource types to create sub-resources.)
> One reason is that you're already using POST to mean "add a > sub-resource"; also using it to mean "apply a delta" means > that you need to switch on the MIME type to determine the > basic semantics. And also you can't store a delta document > as a sub-resource :). Why not? That might actually be very useful. I was going to suggest elsewhere on this much too long thread that the client could PUT a diff to a resource that stood for the delta between the current version and the 'next' (non-existent) version. This would cause the 'next' version to come into existence, and its content would be the combination of the submitted delta and the previous version. Having these 'delta' resources would also allow a client to retrieve a delta as well as PUT a delta.
On 6/24/07, John Panzer <jpanzer@...> wrote: > > It seems to me that Content-Range: on PUT, if supported, means that the > client is no longer replacing the entire resource, only part of it. So > the new state depends on the old state and the state transmitted by the > PUT. Which is what the anti-partial-PUTtians disagree with strongly, PUT requests that are edits usually depend on the old state. This is why they usually include If- headers. > so > it's interesting to note that the base HTTP spec at least appears to > specify a mechanism for doing it. Not really. Someone goes down this path every year or so and hits a dead end at some point. :) It turns out to be easier to give things separate URIs if you want to target them separately. Content-Range for PUTs isn't very widely implemented, and it's often buggy when it is supported. But, it is *generic*: a server can implement it without understanding application semantics or a special media type. Note that it can only do contiguous ranges and there's no danger of a compliant server storing the "wrong" data, as there would be if a delta format was sent as the body of a PUT request. You could invent a new range specifier or Content-* header, but PATCH seems easier to implement to me. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
> But then if you wanted to do a "delta update" of just a > subset of the collection with one operation, POST is already > spoken for. You could of course do a delta update if the > MIME type is X and add a sub-resource if MIME type is Y. > (AtomPub allows POSTing of arbitrary resource types to create > sub-resources.) Define a resource for that subset of the collection. Resources are cheap.
On 6/24/07, Mike Dierken <dierken@...> wrote: > > > But then if you wanted to do a "delta update" of just a > > subset of the collection with one operation, POST is already > > spoken for. You could of course do a delta update if the > > MIME type is X and add a sub-resource if MIME type is Y. > > (AtomPub allows POSTing of arbitrary resource types to create > > sub-resources.) > Define a resource for that subset of the collection. Resources are cheap. /me nods That's certainly the position of the W3C TAG Finding "On Linking Alternative Representations To Enable Discovery And Publishing" http://www.w3.org/2001/tag/doc/alternatives-discovery.html Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
Mike Dierken wrote: >> But then if you wanted to do a "delta update" of just a >> subset of the collection with one operation, POST is already >> spoken for. You could of course do a delta update if the >> MIME type is X and add a sub-resource if MIME type is Y. >> (AtomPub allows POSTing of arbitrary resource types to create >> sub-resources.) >> > Define a resource for that subset of the collection. Resources are cheap. > One thing I'm concerned about in this approach is the impact on If-Match. Presumably you need to retrieve that subset before PUTting a modified version back, even if you already have the full representation from the main resource. (To get the ETag for the subset resource.) Not sure how you'd be able to say "change the atom:title, but only if nobody else has messed with the rest of the atom:entry since I retrieved the full representation" without round-tripping all the fields. The other solutions handle this automatically, I think. Of course the other solutions (Content-Range:, PATCH, and POST-a-delta) have the opposite problem: How do you do an update on just one field while avoiding the lost update problem? In some cases this isn't an issue because you're intentionally changing just the fields you want changed, and you want last update wins, or because you're appending (e.g., atom:category). You also need to define some method of server capability/related URI discovery and some language(s) to specify the subsets. Web3S seems to solve this by extending ETags so they can be hierarchical, and you can use an ETag from one resource with a child resource. I'm not sure how I feel about that solution yet. It feels odd. -John
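The lost-update protection that If-Match provides can be sketched as a conditional PUT against an in-memory resource — names are hypothetical, and the ETag is modelled as a simple version counter rather than anything a real server would emit:

```python
from typing import Optional, Tuple

class Resource:
    """In-memory resource; the ETag is modelled as a version counter.
    Purely illustrative -- not any particular server's behaviour."""

    def __init__(self, body: str) -> None:
        self.body = body
        self.version = 1

    @property
    def etag(self) -> str:
        return '"%d"' % self.version

    def get(self) -> Tuple[str, str]:
        return self.body, self.etag

    def put(self, body: str, if_match: Optional[str]) -> int:
        if if_match is not None and if_match != self.etag:
            return 412  # Precondition Failed: somebody else changed it
        self.body = body
        self.version += 1
        return 200

# Clients A and B both GET, then both try a conditional PUT:
r = Resource("v1")
_, etag_a = r.get()
_, etag_b = r.get()
assert r.put("A's edit", etag_a) == 200  # A updates first
assert r.put("B's edit", etag_b) == 412  # B's stale ETag is rejected
```

The subset-resource concern above is that a separate subset URI gets its own ETag, so an ETag obtained from the full representation says nothing about the subset's version.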
Robert Sayre wrote: > 1.) the semantics of PUT are unambiguous > 2.) requirements on servers receiving PUT requests are undefined > That's not really true. There aren't any requirements on what servers can do as a result of a PUT request, but there are requirements on what response codes they can send back depending on what they decide to do. A server that rejects a client's unambiguous PUT request should not send back a 200 OK anyway. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
> > Define a resource for that subset of the collection. Resources are cheap. > One thing I'm concerned about in this approach is the impact on If-Match. > Presumably you need to retrieve that subset before PUTting a modified version back, > even if you already have the full representation from the main resource. > (To get the ETag for the subset resource.) Use HEAD, I think that will/should return the ETag. > You also need to define some method of server capability/related URI discovery and > some language(s) to specify the subsets. Yep. > Web3S seems to solve this by extending ETags so they can be hierarchical, and you can > use an ETag from one resource with a child resource. > I'm not sure how I feel about that solution yet. It feels odd. I hadn't noticed that part. I haven't read Web3S too closely. (Isn't it odd that their name is so close to S3 ?) It is a gnarly problem when you want to support this sort of data graph - resources with sub-properties which may or may not be exposed as resources. I favor the approach of resource-ifying as much data as you want to be modifiable.
On 6/24/07, Elliotte Harold <elharo@...> wrote: > > That's not really true. There aren't any requirements on what servers > can do as a result of a PUT request, but there are requirements on what > response codes they can send back depending on what they decide to do. Yes, there are a few requirements given in the definition, but none of them seem to relate to this thread. Success codes for success, Error codes for errors--I didn't think we were arguing that. Does the server get to define "success"? Absolutely. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
Mike Dierken wrote: >>>Define a resource for that subset of the collection. Resources are > > cheap. > > >>One thing I'm concerned about in this approach is the impact on If-Match. > > >>Presumably you need to retrieve that subset before PUTting a modified > > version back, > >>even if you already have the full representation from the main resource. >>(To get the ETag for the subset resource.) > > Use HEAD, I think that will/should return the ETag. Sure, but then you have no guarantee that the ETag you see on HEAD matches the data version that you GOT from the primary URI. A: GET primary URI (get value X for field Y) B: PUT primary URI A: HEAD subset URI (get ETag corresponding to B's update of value Z for field Y) A: PUT to subset URI (sending ETag, overwriting B's update) > ... > >>Web3S seems to solve this by extending ETags so they can be hierarchical, > > and you can > >>use an ETag from one resource with a child resource. >>I'm not sure how I feel about that solution yet. It feels odd. > > I hadn't noticed that part. I haven't read Web3S too closely. (Isn't it odd > that their name is so close to S3 ?) Yes, that's driving me nuts. > > It is a gnarly problem when you want to support this sort of data graph - > resources with sub-properties which may or may not be exposed as resources. I > favor the approach of resource-ifying as much data as you want to be > modifiable. > >
I'm migrating my game site to a RESTful web service and have a more specific implementation question regarding authentication. I want GET to be open to all, with only PUT, POST, and DELETE restricted to registered users. Ideally I would like both to be able to ask for a username/password for certain resources using Basic Auth and to be able to use SSL certificates for those users that want them. I'm using Apache 2.2. The problem is I can make the Basic Auth work, and I can make the SSL certs work, but I can't seem to find any way to make them *both* work (either/or that is). Can anyone point me to a HOWTO or another thread that might discuss this? Thanks for your help! -- Aaron Dalton | Super Duper Games aaron@... | http://superdupergames.org
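One pattern that may fit (a sketch, not tested here): make client certificates optional and let mod_ssl's FakeBasicAuth feature translate a presented certificate's subject DN into a Basic-auth username, so a single Require valid-user covers both schemes. The directives below are real Apache 2.2 / mod_ssl directives, but the paths, realm and file contents are placeholders:

```apache
# Accept EITHER a client certificate OR Basic Auth for write methods,
# leaving GET/HEAD/OPTIONS open to everyone. Sketch only.
SSLVerifyClient optional
SSLVerifyDepth  1
SSLOptions      +FakeBasicAuth   # maps the cert's subject DN to a Basic user

<Location />
    AuthType Basic
    AuthName "Super Duper Games"
    # For FakeBasicAuth, add one line per accepted certificate DN to this
    # file (mod_ssl documents the fixed encrypted password "xxj31ZMTZzkVA",
    # i.e. "password"), alongside the ordinary username:password entries.
    AuthUserFile /etc/apache2/htpasswd
    <LimitExcept GET HEAD OPTIONS>
        Require valid-user
    </LimitExcept>
</Location>
```

Whether this interacts cleanly with the rest of a given 2.2 auth setup is worth verifying against the mod_ssl documentation before relying on it.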
Walden Mathews | : You can't do: PUT http://www.marcdegraauw.com/friend/markbaker, ... | In concrete terms, "you can't do" means what? True, this was a bad example. | And I think what you and others on this list are still | missing is that there is | no "non-idempotent" constraint anywhere. It would be lunacy. I think that's exactly what I've been saying from the start of this thread. I think Alan Dean and Julian Reschke made similar remarks. Marc
I am really struggling with Yahoo groups on using this email list. No
other email list I'm subscribed to is this troublesome, and I'm
subscribed to quite a lot.
I keep getting "Your messages are bouncing" notices unsubscribing me
from the group, but when visiting the "bouncer" page it doesn't tell
me which messages are bouncing (just a message number), so it's
difficult for me to report this on to my email administrators. (those
messages bounce because they get too high a spam score).
Half of the messages to the list contain quoted HTML with menu items
intended for the original recipient ("Visit your group" etc). (even
if I've set my format to "Traditional" - the problem is that not
everyone is on traditional, and when they reply they include the menus)
I can understand that the web accessible archive gave some added value
at some point, but these days the interface just feels extremely slow
and horrible to use compared to, say, Google groups; it doesn't even
understand threading properly.
When people try to include code examples of XML/Java/Python their
indentation is messed up, making the examples unreadable, and I think
this is quite an important feature on a technical email list.
Why was Yahoo groups chosen for the REST discussions? Is it possible
to review that choice?
--
Stian Soiland
I was struggling with a problem with my Eclipse installation that
turned out to be due to non-rest-ful behaviour of our wireless
network router at the University.
Our wireless network is like this:
Before you have been "authenticated" any http requests to ANY address
will redirect with 302 Found to the login page, while any other ports
are firewalled.
If the user's browser tries to access Google or similar it will
redirect to the login page, where filling in username and password,
and then pressing submit, opens up access. (Ideally the proxy should
redirect to the original request at this stage, but I don't think this
functionality has been implemented by the system administrators yet.)
From now on all HTTP access from that client's IP (or MAC) address
is allowed unrestricted and behaves as normal.
So this is not a normal HTTP proxy, but intercepting on the TCP/IP
layer. After authentication everything works as if I'm on the wire. I
think it's quite a cute solution for the basic cases, as there's no
configuration required and you only need to login once a day.
Now the problem is this redirection-thing. What if the client is not
a human-controlled browser, but a program? In my case the program was
Eclipse, trying to download an XML Schema file. Here's a reproduction
of what probably happened (note that
http://zone1.cwg.its.manchester.ac.uk/ is only accessible from our wlan)
: stain@mira ~/Desktop;curl -vL
http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd
* About to connect() to java.sun.com port 80
* Trying 72.5.124.55... * connected
* Connected to java.sun.com (72.5.124.55) port 80
> GET /xml/ns/persistence/persistence_1_0.xsd HTTP/1.1
User-Agent: curl/7.13.1 (powerpc-apple-darwin8.0) libcurl/7.13.1
OpenSSL/0.9.7l zlib/1.2.3
Host: java.sun.com
Pragma: no-cache
Accept: */*
< HTTP/1.1 302 Found
< Date: Mon, 25 Jun 2007 11:30:41 GMT
< Server: Apache/2.0.54 (Debian GNU/Linux) DAV/2 PHP/4.3.10-16
mod_ssl/2.0.54 OpenSSL/0.9.7e
< Location: http://zone1.cwg.its.manchester.ac.uk/redirect.html
< Content-Length: 235
< Connection: close
< Content-Type: text/html; charset=iso-8859-1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0   235    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Closing connection #0
* Issue another request to this URL:
'http://zone1.cwg.its.manchester.ac.uk/redirect.html'
* About to connect() to zone1.cwg.its.manchester.ac.uk port 80
* Trying 10.11.0.25... * connected
* Connected to zone1.cwg.its.manchester.ac.uk (10.11.0.25) port 80
> GET /redirect.html HTTP/1.1
User-Agent: curl/7.13.1 (powerpc-apple-darwin8.0) libcurl/7.13.1
OpenSSL/0.9.7l zlib/1.2.3
Host: zone1.cwg.its.manchester.ac.uk
Pragma: no-cache
Accept: */*
< HTTP/1.1 200 OK
< Date: Mon, 25 Jun 2007 11:30:42 GMT
< Server: Apache/2.0.54 (Debian GNU/Linux) DAV/2 PHP/4.3.10-16
mod_ssl/2.0.54 OpenSSL/0.9.7e
< Last-Modified: Fri, 10 Nov 2006 14:35:49 GMT
< ETag: "24addd-466-77dc5b40"
< Accept-Ranges: bytes
< Content-Length: 1126
< Content-Type: text/html
100  1126  100  1126    0     0   5332      0 --:--:-- --:--:-- --:--:--  5332
* Connection #0 to host zone1.cwg.its.manchester.ac.uk left intact
* Closing connection #0
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>Network Roaming Service</title>
<meta http-equiv="Content-Type" content="text/html;
charset=iso-8859-1" />
<meta http-equiv="Refresh" content="10;URL=https://zone1.cwg.its.manchester.ac.uk/" />
<meta content="Copyright 2006 University of Manchester. All rights
reserved." name="copyright" />
<meta http-equiv="Content-Language" content="EN" />
<meta content="NOFOLLOW,INDEX name=robots" />
<link href="style.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div id="content">
<h1>Network Roaming Service</h1>
<p>If your browser supports SSL, please <a href="https://zone1.cwg.its.manchester.ac.uk/">follow
this link</a> or wait ten
seconds to be redirected to a secured version of the Intranet.</p>
<p>If your browser does not support SSL encryption, then you can <a
href="http://zone1.cwg.its.manchester.ac.uk/">follow this link</a>
instead for an <b>unsecured</b> version of the site.</p>
</div>
</body>
</html>
I found this HTML page stored by Eclipse as my referenced "XML schema".
As you see, after the 302 to the authentication server, the login
page is returned with 200 OK. There is no Cache-Control: no-cache,
and the automatic clients are happy and cache the "XML Schema", which
of course won't work later on.
(The page is actually a redirect to the https login form, but this is
done by meta http-equiv tags that the automatic clients normally
don't evaluate.)
I can picture a lot of programs running on client machines these days
that do similar "update checks" and so on. There is no fair way for
them to distinguish a real response from a wlan login page
when no appropriate headers have been supplied.
Thus, how *should* the proxy behave to make life easy both for human
browser users and automatic http clients? I adviced the system
administrators to consider returning with 401 Unauthorized or 407
Proxy Authentication Required and with a no-cache header, but I'm not
very certain if this is the best solution. And is the initial 302
Found redirect from the original page all right, or should it be 303
See other?
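To make that advice concrete, here is a minimal sketch (Python, standard library only; the status choice, realm, and page text are illustrative assumptions, not taken from any real deployment) of what an honest portal response could look like: a 4xx status so automatic clients know something is wrong, a no-cache directive so nothing stores the login page as if it were the requested resource, and the human-readable login page still carried in the body.

```python
# Sketch of a captive-portal response that is honest to automatic
# clients. All names and the page text are illustrative.

LOGIN_PAGE = b"""<html><body>
<h1>Network Roaming Service</h1>
<p>Please <a href="https://zone1.cwg.its.manchester.ac.uk/">log in</a>.</p>
</body></html>"""

def portal_response():
    """Return (status, headers, body) for an unauthenticated request."""
    status = 407  # Proxy Authentication Required; 401 or 403 also defensible
    headers = {
        "Content-Type": "text/html; charset=iso-8859-1",
        # Keep clients from caching the login page as if it were
        # the resource they actually asked for.
        "Cache-Control": "no-store, no-cache, must-revalidate",
        "Proxy-Authenticate": 'Basic realm="roaming"',
    }
    return status, headers, LOGIN_PAGE
```

A browser will still render the body, while an automatic client sees the 4xx and knows not to treat the page as its "XML schema".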
--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
Stian Soiland wrote: > ... Hi, this is an HTTP question, not a REST question. That being said, returning anything except a 4xx or a 5xx is a bug (IMHO). Best regards, Julian
On 6/25/07, Stian Soiland <ssoiland@...> wrote: > Why was Yahoo groups chosen for the REST discussions? Is it possible > to review that choice? I'm thinking it's great (in a way), because it gives people incentive to come help me with Wirebird (a mailing list with a RESTful web interface... among other things, a web forum with features that dial-up BBSes had by 1986, since apparently all *other* web forums were written by people who never heard of Usenet *or* Fidonet). Unfortunately, it's still in alpha...
Stian Soiland <ssoiland@...> writes:
> I keep getting "Your messages are bouncing" notices unsubscribing me
> from the group, but when visiting the "bouncer" page it doesn't tell
I am using an email client to read and post messages in this mailing
list. I do not understand what you meant by ``"Your messages are
bouncing" notices``, but I have not received anything from this
mailing list beside the actual posts.
> Half of the messages to the list contains quoted HTML with menu items
> intended for the original recipient ("Visit your group" etc). (even
> if I've set my format to "Traditional" - the problem is that not
I am not sure what 'traditional' is. I set my email client to prefer
text/plain content. Posts from Yahoo! Groups always include both
text/plain and text/html content. Choose the former if you don't want
the HTML menu, and you will also get correct indentation.
> everyone is on traditional, and when they reply they include the
> menus)
Never seen this. Are you sure you are using the text/plain content?
All posts and replies I have had look just like any email or posts in
Usenet except for the extra junk appended at the end.
> The web accessible archive I can understand gave some added value at
I don't like the web archive either. I use yahoo2mbox to move the
archive locally so my email client can read it.
YS.
* Stian Soiland <ssoiland@...> [2007-06-25 14:00]: > Before you have been "authenticated" any http requests to ANY > address will redirect with 302 Found to the login page, while > any other ports are firewalled. That’s wrong. As Julian said, the proxy should return the login page directly, in the body of a 4xx response. Probably 403. (Julian proposed 5xx as an option but that would be wrong, IMO.) The browser will render such a page just fine, as if nothing unusual was going on; other programs, however, will know they’re not dealing with a legitimate response. Some such proxies are even worse – they send 301s. This is really REALLY nasty if you accidentally launch your aggregator behind such a proxy… Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
I'm having trouble figuring out how to expose a 2-page (n-page?) "wizard" input form in a
RESTful manner. For example, I'm implementing a referral program where the user enters a
list of names and addresses into an HTML form and then picks from a list of rewards. After
each step, I need to perform server-side validation. Only after the second stage validation
occurs can the "transaction" be considered complete and committed. Here's what I've
considered thus far:
GET /referrals: gets an empty form in which users enter referral names and emails
POST /referrals: submits the referral form data (names and emails) and returns a 201
Location: /referrals/{id}, where {id} is the server-generated id for this set of referrals, or a 409
if the validation failed, displaying the original name/email form with inline error messages
GET /referrals/{id}: based on the previous post, the server knows that the referral names and
addresses have been entered, so the user gets a new form to select a reward (essentially
customizing the representation based on the state of the resource)
POST /referrals/{id}: submit _only_ the reward data to update the referral resource (e.g.
partial PUT without the religious war).
Does this sound "correct" or at least reasonable? I would greatly appreciate any alternatives
or suggestions.
Justin
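To sanity-check the state transitions described above, here is a toy in-memory simulation of the flow (all function names, return shapes, and the reward list are invented for illustration): the first POST creates a pending resource and returns its Location, a GET serves whichever form matches the resource's state, and the second POST commits the transaction.

```python
import itertools

# In-memory stand-in for the server side of the two-step wizard.
# All names (referrals, pending/complete, etc.) are illustrative.
_store = {}
_ids = itertools.count(1)

def post_referrals(names_and_emails):
    """Step 1: validate and create a pending referral set (201 + Location)."""
    if not names_and_emails:
        return 409, None          # validation failure: redisplay the form
    rid = next(_ids)
    _store[rid] = {"referrals": names_and_emails, "state": "pending"}
    return 201, f"/referrals/{rid}"

def get_referral(rid):
    """Serve a representation matching the resource's state."""
    r = _store[rid]
    return "reward-form" if r["state"] == "pending" else "summary"

def post_reward(rid, reward):
    """Step 2: validate the reward and commit the transaction."""
    if reward not in ("mug", "t-shirt"):
        return 409
    _store[rid].update(reward=reward, state="complete")
    return 200
```

Walking a request through: POST the names gives 201 with a Location; a GET of that Location serves the reward form; POSTing the reward completes it; a later GET serves the summary.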
At frevvo, I have a project where I need to update database records using forms. I'm using frevvo's Live Forms product which <a href="http://www.frevvo.com/blog/?p=26">uses REST to interact with back end resources</a>. Instead of a one-off, I decided to try and do something generic since I think we're going to need this often and it's got to be a reasonably common problem. So, I created a database connector which is described briefly in <a href="http://www.frevvo.com/blog/?p=37">this blog article</a> and in more detail in the links therein. The connector source code is freely available to anyone who wants to use REST with databases and is based upon Restlet. As of now, it's definitely beta-level code. In a nutshell, you install it and configure some queries (in an XML configuration file). The connector will automatically create REST URIs for the specified queries. I found that this works well for my application (a small part of which is included in the download). See <a href="http://www.frevvo.com/blog/?p=37">the relevant blog article</a> for details, download links, the frevvo application etc. Comments welcome! Thx, -Ashish ----- Ashish Deshpande Founder, frevvo LLC.; http://www.frevvo.com frevvo Blog: http://www.frevvo.com/blog
Patrick Mueller wrote: > w/r/t this (for new) newsgroup post, there's no XML in it. Well, there > is an HTML representation available, primarily so Yahoo! can insert ads > into the post. But I don't see XML containing any 'information' of any > value here. But try marking it, or any other narrative structure, up in JSON. Account for mixed content, repetition, recursive nesting, and all the other structures we find in real human writing. JSON becomes an illegible mess at best. XML doesn't blink. (at least unless overlap rears its head). > When I open my web browser, I'm typically seeing markup (HTML, usually > not XML) which is used primarily for visual rendering. It seems fairly > rare when you find any web content for which the markup contains > semantic value. Microformats are a great counter-example, but are still > not widely adopted. I was hoping you'd provide more examples like > Microformats. So? XML can do this. JSON can't. Try converting HTML to JSON and XHTML some time. See which one is easier to work with. >> Most of the world's information is *not* in relational databases. It's >> not in databases of any kind. It's locked up in books, Word documents, >> PowerPoint presentation (well, I suppose most of those are technically >> information-free :-) ) and so on. You get the idea. > > I don't see how XML helps here; if you want to mine this data, you need > a natural language understanding engine. Do you really think there's > much semantic markup used in existing word and powerpoint documents? More than you'd think, but regardless XML can handle this and JSON can't. > However, I'm not convinced that XML is the be-all, end-all story for > structuring information; in many cases, data can in fact be reduced to > simple data structures; in those cases, I'll claim that there are > easier-to-parse, easier-to-read, easier-to-map-to-progLangs and smaller > renderings of that information. 
It's not the be-all, end-all solution; but you make it clear you still don't get it. The problem is most definitely not "structuring information". The problem is understanding information in its unstructured reality. XML is less structured than JSON; and that makes it more powerful, more useful, and more faithful to reality. I still think ultimately we need to come to grips with truly unstructured information, and XML doesn't do that; but it is further along in that direction than JSON is. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
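The mixed-content point is easy to demonstrate with a small sketch (the sample sentence and the JSON convention are mine, purely illustrative): XML tooling models interleaved text and elements natively, while a JSON encoding has to invent an ad-hoc node-list convention from scratch.

```python
import xml.etree.ElementTree as ET
import json

# A sentence with inline markup -- classic mixed content.
doc = '<p>See the <em>second</em> edition for details.</p>'

p = ET.fromstring(doc)
# ElementTree models mixed content directly: text before the child,
# the child element itself, and the child's tail text.
assert p.text == "See the "
em = p.find("em")
assert em.text == "second" and em.tail == " edition for details."

# One possible JSON encoding of the same sentence: a list of nodes,
# where strings are text and dicts are elements. The convention is
# invented here -- JSON itself gives you nothing for this.
as_json = json.dumps(
    ["See the ", {"em": ["second"]}, " edition for details."])
```

The JSON version is workable for one sentence; with repetition, recursion, and attributes layered in, the ad-hoc convention is where the illegibility creeps in.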
So Julian Reschke shared good thoughts and his experience with caches, extension methods and the like. I second him: we need more data on the problems in currently deployed caches to talk intelligently about things. Hopefully Mark Nottingham or others will share the details of their findings. I'll pick up on this comment though and riff a little. Here goes: > My experience with developing (and supporting) the HTTP server > and client stack in one of SAP's portal products for many years > says that new methods do not cause major problems. That may be > influenced by the fact that most *authoring* goes over HTTPS, > and thus caches won't be able to do any harm. The subtlety of the word "authoring" is the key point here and this subtlety is often lost. What I often find in practice is that many simply go beyond having "authoring over HTTPS" and simply say "everything over HTTPS". I know I had to deal with braindead internal IT corporate standards at Big Blue that really set back that company's embrace of the web by years. "Why is it so slow?" was a common question about web usage at IBM 6 years ago. When I hear people say without qualifying that "SSL/TLS is the answer", what they are really saying is let's remove all visibility to intermediaries. The downside of this as an architectural solution is that you lose the possible benefits of caches and other intermediaries for things other than authoring. And common case performance is degraded. Giving visibility to intermediaries is a Good Thing (TM) and a big part of the web style. Caches aren't the only intermediaries we need to worry about. Firewalls are also problematic. More generally, even though we are architecting for middlemen, we shouldn't be held hostage by them. 
As an example, as the Atompub WG was working through a few kinks last year, there was the suggestion about possibly apocryphal "concern over firewall configurations that block PUT and DELETE operations" [1] and pointers to temporary workarounds some implementors had introduced in the wild e.g. X-HTTP-Method-Override custom HTTP headers in GData (a practice that is perhaps now deprecated) Now I won't get into that actual case and I'll simply note the fact that Atompub is being standardized with full use of PUT and DELETE, and that implementors haven't seen any deployment issues. But there is an object lesson here. Many have gotten burnt at some point by a recalcitrant intermediary. Sometimes it is poorly configured, sometimes it's a matter of policy. We've all had to route around it. That's why tunneling is a widely used design pattern. In the dark recesses of our collective memory we can all remember the bad cache or firewall. I have many war stories to share about middleboxes gone awry. It's a common enough thing that I note in passing the title of a recent thread: detecting cache malfunctioning. But on to extensibility through new HTTP methods... John Hanna recently put it this way: "Increasing methods is painful compared to increasing headers which is painful compared to increasing document features." The pain of extension methods in HTTP are: 1. comprehension This is a social problem. We value the uniform interface and want to have use cases that have global significance. Getting people to agree on common semantics of anything is an N squared problem. It's a hard thing. 2. implementation Clients and intermediaries often have poor handling of extension methods at least with HTTP. The first time I wrote an HTTP client, the first bug that was filed against it was about the interaction with a caching intermediary. For what it's worth, the second bug was with chunked transfer encoding. 
It takes Jane Programmer a day with the HTTP 1.1 spec to get a working client or server running, but it takes far longer to get to a good client, let alone a good server, let alone a good cache, proxy or filter something or other. Even with the eyeballs that open source brings, test suites, and the sharpest folks on board, it is a brittle web. The name Apache is testimony to this. Also extensibility tends to be the last 20% of the HTTP implementation and most don't get to it. 3. the "default deny" mentality Many intermediaries, as a matter of policy or explicit configuration, often disable extension methods. i.e. many security folk don't believe in "must ignore" style extensibility and when HTTP came along applied this. Governance and regulation almost seem to lead to filtering paranoia. Deperimeterization sadly hasn't gained wide currency in the security industry. By analogy, our router/NAT/firewall boxes "protect" us but they also inhibit innovation - e.g. I have never successfully configured my Linksys box to share something over Bittorrent, even punching pinholes, upgrading firmware. Default deny can hurt us. 4. misconfiguration It is very easy to misconfigure intermediaries. "What does that button do?" is the source of more outages than I can count. I'm sure there are more pain points with extension methods but note that 3 of the 4 that come to mind involve intermediaries. Another sidenote with some tongue in cheek.... I haven't commented on the issue-du-jour: PUT and it's not because my mails disappeared into the ether. There's probably been several hundred emails on this list about the meanings of PUT and POST just in the past few months. Rather I was wondering about why PUT is a permathread in every mailing list about the web style. "The Ambiguous Semantics Of PUT" sounds an awful lot like "The Unbearable Lightness Of Being" via Milan Kundera, and I feared getting trapped in an existential debate: semantics, being, metaphysics etc. 
(68 replies and counting) So a question for Nick Gall: care to comment about your catchy title? Was the literary allusion intended or am I reading things too deeply? I have clear notions of the semantics of PUT and on most things in the HTTP 1.1 spec. GET, POST, PUT, DELETE are well-defined but PUT continues to cause trouble for some. So let's have some metaphysics on the four horsemen of the web: * GET is the first thing we do in life, from birth we cry for attention and demand recognition. Humans, like all mammals, are acquisitive beings. Me, me, me. Want, want, want. Get, get, get. The dairy industry's most effective campaign was the Got Milk advertisement series and it is fitting. It's not surprising that 90% of operations on the web, our greatest collaborative platform, are GETs (sidenote: what is the source of that oft-repeated statistic?). It is also not surprising that an architecture that optimizes for GETs will see widespread adoption. * DELETE also is common in human experience, we are always destroying things. Anecdotally, the third phrase my little niece spoke was "it broke". The fourth was "it dropped". "Mum" and "Dad" were the first two. * POST is necessarily opaque and has the virtue of explicitly referring to a social institution that likely dates back to the invention of writing systems. Post and mailing systems are well known and have social meaning in our communities. We have written documents passed through intermediaries for thousands of years. * PUT however gets us into the matter of linguistics. Bear with me for a minute here. In my dictionary (a crummy concise Oxford paperback dictionary, 1983 edition), there are 10 definitions of put. The main definition is reasonable but I think but you can begin to see where Talmudic interpretation might start to creep in. put. v. 1. to move (a thing) to a specified place, to cause to occupy a certain place or position, to send The other definitions are interesting also: 2. 
to cause to be in a certain state or relationship 3. to subject 4. to estimate 5. to express or state 6. to impose as a tax 7. to stake (money) in a bet 8. to place as an investment 9. to lay (blame) on 10. (of ships) to proceed I'll note that the verb put has the same number of variants (10) as the verb get in this dictionary. The variation in the senses of the variants is worrying however; there seem to be too many nuances. Also, unlike the case of get, money comes into play in the definitions. 3 variants have financial concerns in mind and one even includes the imposition of taxes. Money is a funny thing and will forever cause people to project their received meaning onto words and actions. Others have been pointing to unclear text or ambiguities in the spec, I think fundamentally that it is a matter of language. Some would have preferred update, some would have preferred move etc. Put implies pockets to some and hiding to others. What do we put? And where do we put it? And for how long do we put it? In summary the linguistics of put are really affecting PUT. Anyway I hope this hasn't been too much of a distraction. In parting I'll suggest that: 1. The REST Inquisition [2] was over a matter of interpretation over the phrase "web services" and I believe that debate is over. We even have an O'Reilly book on it. Case closed. 2. The REST Reformation, when it comes, will also be over a matter of interpretation. One of the Theses that our future Martin Luther will stick to the doors of the Catholic Church of REST will be about PUT. The Protestants of REST will be more liberal and expedient about the use of PUT and won't dwell on the so-called "ambiguous semantics of PUT". They'll just use it and see what happens. I think the Atompub folks have a good claim to be the first Protestants of REST. Cheers. 
Koranteng (ducks) [1] http://www.imc.org/atom-protocol/mail-archive/msg05981.html [2] http://lists.xml.org/archives/xml-dev/200504/msg00244.html -- Koranteng's Toli - the blog edition http://koranteng.blogspot.com/
On 6/23/07, Steve Bjorg <steveb@...> wrote: > I still think your statement is wrong. Said differently, why do you > state that XML has no semantics and JSON does? The only difference in *semantics* between JSON and XML [core] is that JSON has implicit data types as part of the core, while XML uses namespace extensions for the same. Seriously though, what do people actually mean by "XML"? It's a big standard with lots of extensions and sub-specifications. XML (as a big standard) has more semantics than JSON, but as a simple syntactic format it has less. But c'mon, folks; data type semantics are the most boring and trivial of all semantics we usually have to deal with, and out-of-the-box neither JSON nor XML delivers seriously useful semantics. (Although xml:id and xml:idref are sometimes quite sexy) Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchymist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
On 6/25/07, Koranteng Ofosu-Amaah <koranteng@...> wrote: > So Julian Reschke shared good thoughts and his experience with caches, > extension methods and the like. I second him: we need more data on the > problems in currently deployed caches to talk intelligently about > things. Hopefully Mark Nottingham or others will share the details of > their findings. > > I'll pick up on this comment though and riff a little. Here goes: > > > My experience with developing (and supporting) the HTTP server > > and client stack in one of SAP's portal products for many years > > says that new methods do not cause major problems. That may be > > influenced by the fact that most *authoring* goes over HTTPS, > > and thus caches won't be able to do any harm. > > The subtlety of the word "authoring" is the key point here and this > subtlety is often lost. I have to admit that I really don't understand the distinction being made here. What exactly does "authoring" mean, and why is it treated differently than everything else? > > What I often find in practice is that many simply go beyond having > "authoring over HTTPS" and simply say "everything over HTTPS". I know > I had to deal with braindead internal IT corporate standards at Big > Blue that really set back that company's embrace of the web by years. > "Why is it so slow?" was a common question about web usage at IBM 6 > years ago. > > When I hear people say without qualifying that "SSL/TLS is the > answer", what they are really saying is let's remove all visibility to > intermediaries. The downside of this as an architectural solution is > that you lose the possible benefits of caches and other intermediaries > for things other than authoring. And common case performance is > degraded. > Actually, SSL doesn't just remove visibility for intermediaries, it pretty much removes the possibility of even having intermediaries (at least the transparent kinds).
Problem is, when you start talking about security, intermediaries become a liability - that's why SSL tunnels through proxies. If I'm going to the trouble of encrypting my conversation, I probably don't trust any intermediary to see any of the data being exchanged (authoring or otherwise). --Chuck
* Justin Makeig <jm-public@...> [2007-06-25 23:15]: > Does this sound "correct" or at least reasonable? That is, unless I overlooked something in your exposition, how I would do it. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On 6/25/07, Justin Makeig <jm-public@...> wrote:
> I'm having trouble figuring out how to expose a 2-page (n-page?) "wizard" input form in a
> RESTful manner.
I haven't seen the "trouble part" :). Looks good, as Aristotle says.
>For example, I'm implementing a referral program where the user enters a
> list of names and addresses into an HTML form and then picks from a list of rewards. After
> each step, I need to perform server-side validation. Only after the second stage validation
> occurs can the "transaction" be considered complete and committed. Here's what I've
> considered thus far:
>
> GET /referrals: gets an empty form in which users enter referral names and emails
> POST /referrals: submits the referral form data (names and emails) and returns a 201
> Location: /referrals/{id}, where {id} is the server generated-id for this set of referrals or a 409
> if the validation failed, displaying the original name/email form with inline error messages
IE6 and earlier used to display the returned representation for 4xx
errors only if they exceeded a certain length. Otherwise IE6 displayed
a custom error message, ignoring your returned entity. I'm having
trouble finding the right Google search for that behavior. You have to
make your returned entity larger than e.g. 4096 bytes, or something,
for it to display.
No idea if it's an issue in IE7.
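A hedged sketch of the usual workaround (note: the 4096-byte figure above is a guess, so the threshold here is a parameter, not a documented IE constant): pad the 4xx error entity with a trailing HTML comment until it clears the threshold.

```python
def pad_error_entity(body: bytes, threshold: int = 4096) -> bytes:
    """Pad an HTML error body with a trailing comment so that browsers
    which suppress 'short' 4xx entities (old IE's friendly-error
    behavior) render ours instead. The threshold value is an assumption,
    not a documented constant."""
    if len(body) >= threshold:
        return body
    padding = b"<!-- " + b"." * (threshold - len(body)) + b" -->"
    return body + padding
```

The padding lives in a comment, so browsers that would have rendered the entity anyway are unaffected.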
I guess 409 is as good as 400.
> GET /referrals/{id}: based on the previous post, the server knows that the referral names and
> addresses have been entered, so the user gets a new form to select a reward (essentially
> customizing the representation based on the state of the resource)
> POST /referrals/{id}: submit _only_ the reward data to update the referral resource (e.g.
> partial PUT without the religious war).
>
Sounds like you consider 'referral resource' to include the chosen
reward? Nothing wrong with that, just wasn't clear until now.
> Does this sound "correct" or at least reasonable?
Yep.
>I would greatly appreciate any alternatives
> or suggestions.
>
I get the impression you don't feel comfortable with this.
> Justin
--
Hugh Winkler
Wellstorm Development
http://www.wellstorm.com/
+1 512 694 4795 mobile (preferred)
+1 512 264 3998 office
On 6/23/07, Marc de Graauw <marc@...> wrote: > Mark Baker: > > | I emphatically *agree* that idempotency is a quality of an HTTP > | request. In fact, my argument depends on that being the case because > | I claim that just by examining the request (i.e. not by waiting to see > | what happens on the server), one can determine whether it's idempotent > | or not. > > Aren't you defending the (circular) position that a message is > non-idempotent because the chosen method (POST) is non-idempotent, and we > must choose POST because the message semantics are non-idempotent? Yes to the first part, no to the second. > What about this case: I set up a web service where my friends can create > pages on my web server for themselves. As 'friend' qualifies anybody who has > taken an inordinate amount of time to correct my many misunderstandings and > shortcomings, and given your effort on this list, you certainly qualify. So > you can create http://www.marcdegraauw.com/friend/markbaker, and enclose in > the body some comment which appears on the page. My server however creates > the page, which will include links to all my blog entries about you and > more. You can't do: PUT http://www.marcdegraauw.com/friend/markbaker, > because RFC2616 says: "the URI in a PUT request identifies the entity > enclosed with the request" and http://www.marcdegraauw.com/friend/markbaker > does not identify the comment, but the page-to-be-created. You can do: POST > http://www.marcdegraauw.com/friend/ with 'markbaker' and the comment in the > body. But if you POST once, twice or N times, my server will end up in > exactly the same state: once a friend, always a (=1) friend. Sounds pretty > idempotent to me. I think a PUT would work there fine, as I can PUT the comment and then you can pad it with your links and save it to your filesystem or whatever. Mark.
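Marc's "once a friend, always a friend" point is about the effect on server state, which a toy model (all names and data invented for illustration) makes concrete: repeating the friend-creating request reproduces exactly the same state, even though the message travels as a POST.

```python
# Toy server state for the friend-page example; names are illustrative.
pages = {}

def create_friend_page(name, comment):
    """POST /friend/ -- the server builds the page, adding its own links."""
    pages[f"/friend/{name}"] = {
        "comment": comment,
        "links": [f"/blog/about-{name}"],  # server-generated decoration
    }

create_friend_page("markbaker", "thanks for the corrections")
snapshot = dict(pages)
create_friend_page("markbaker", "thanks for the corrections")

# Repeating the request reproduces the same state: the *effect* is
# idempotent regardless of the method the message travels under.
assert pages == snapshot
```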
On 6/25/07, Chuck Hinson <chuck.hinson@...> asked: > What exactly does "authoring" mean and why is it treated differently than everything else. about:Julian's observation > > > My experience with developing (and supporting) the HTTP server > > > and client stack in one of SAP's portal products for many years > > > says that new methods do not cause major problems. That may be > > > influenced by the fact that most *authoring* goes over HTTPS, > > > and thus caches won't be able to do any harm. I guess it comes down to tolerance for stale data... When it comes to GETs we are pretty tolerant, but for anything else - and that is what authoring is (post, put, delete, all those webDav methods, synchronization etc.), you do care about stale data. Basically it's because it means you'll have to waste your time investigating or fixing it with those you're collaborating/trading/communicating with. Basically we're all lazy and our time is precious. Gabriel Garcia Marquez has a saying that "any idea which couldn't stand a few decades of neglect is not worth anything" That however was before the internet came along and timescales have compressed since. My guess is that for most things in life, and certainly on the web, 20 minutes is fine - I say that because stock quotes are typically delayed 15 to 20 minutes and that's all about money - we really care about money. Unless you have specialized needs, say buying stocks, or arbitrage in financial markets or say you're one of those bloggers who live by Pagerank and breathlessly check Technorati every 5 minutes salivating over every trackback and such (Attention! Me! Get!), you could do with stale data for GETs. A decade of living with the web has shown that real time is overrated. Of course, you're only human and time is short, you still have low tolerance for poor latency, you do need the sub-second response for the Google query and that YouTube video should start streaming within 15-20 seconds tops. 
The other thing that is compressing timescales further is mobility and location awareness. Sometimes you do want to GET information relevant for here and now. But even that is a second order effect, we're pretty tolerant of staleness. At least that's my take... -- Koranteng Observers are worried - http://koranteng.blogspot.com/
http://www.ddj.com/dept/webservices/199902676 regards -- Dave Pawson XSLT XSL-FO FAQ. http://www.dpawson.co.uk
On 6/26/07, Nick Gall <nick.gall@...> wrote: > > I'm sorry but Occam's Razor > tells me there is a simpler way to explain the text of the spec: it's > ambiguous. Maybe read the thesis again? The simplest way to explain the text is exactly what I've written. Maybe I've gone into too much detail about the consequences. Anyway, it's simple: protocol design should concentrate on the semantics of the messages as much as possible, and specify the behavior of the recipients as little as possible. This allows recipients to evolve or be replaced over time. This is one of the chief aims of REST, and HTTP is a decent example. Non-uniform semantics for a given request method rob the messages of self-descriptiveness, and make it way, way more difficult to substitute one HTTP server implementation for another. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
Justin Makeig wrote:
> I'm having trouble figuring out how to expose a 2-page (n-page?) "wizard" input form in a
> RESTful manner. For example, I'm implementing a referral program where the user enters a
> list of names and addresses into an HTML form and then picks from a list of rewards. After
> each step, I need to perform server-side validation. Only after the second stage validation
> occurs can the "transaction" be considered complete and committed. Here's what I've
> considered thus far:
>
> GET /referrals: gets an empty form in which users enter referral names and emails
> POST /referrals: submits the referral form data (names and emails) and returns a 201
> Location: /referrals/{id}, where {id} is the server generated-id for this set of referrals or a 409
> if the validation failed, displaying the original name/email form with inline error messages
> GET /referrals/{id}: based on the previous post, the server knows that the referral names and
> addresses have been entered, so the user gets a new form to select a reward (essentially
> customizing the representation based on the state of the resource)
> POST /referrals/{id}: submit _only_ the reward data to update the referral resource (e.g.
> partial PUT without the religious war).
Why not just:
GET /referals
POST /referrals - 200 with entity including the data from the request in
hidden fields.
POST /referrals_step_2 - 303 to created resource.
This means that no resource is created in the intermediary stage, so
abandoning the transaction has no side effects, not even orphaned
resources.
The big downside to acting directly on an entity retrieved from a POST
is that since it does not represent a resource, but rather the result
of an entity being POSTed to a resource, it cannot be referred to
(bookmarked etc.). However, in this case we are only interested in this
half-constructed item while we are using the wizard. There is no value
in a half-constructed resource outside of this context, so no value in
creating a resource at this stage.
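One wrinkle with carrying the step-1 data in hidden fields is that the client now holds the half-constructed state, so the server should be able to detect tampering before step 2 commits. A common way to do that, sketched here with invented names (this construction is mine, not from the thread), is to sign the serialized state with an HMAC and verify it when the second POST arrives.

```python
import hmac, hashlib, json, base64

SECRET = b"server-side-secret"  # illustrative; never emitted to the page

def to_hidden_field(state: dict) -> str:
    """Serialize wizard state for a hidden form field, with a MAC."""
    blob = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    mac = hmac.new(SECRET, blob.encode(), hashlib.sha256).hexdigest()
    return f"{blob}.{mac}"

def from_hidden_field(field: str) -> dict:
    """Verify the MAC and recover the state; reject tampered fields."""
    blob, mac = field.rsplit(".", 1)
    expect = hmac.new(SECRET, blob.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expect):
        raise ValueError("tampered wizard state")
    return json.loads(base64.urlsafe_b64decode(blob))
```

The state stays on the client between POSTs, so no server-side resource exists until the final step, which is exactly the property argued for above.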
* Jon Hanna <jon@...> [2007-06-26 11:25]: > Why not just: > GET /referals > POST /referrals - 200 with entity including the data from the > request in hidden fields. > POST /referrals_step_2 - 303 to created resource. > > This means that no resource is created in the intermeditary > stage, so the transaction being abandoned has no side-effects, > even in so far as orphaned resources. > > The big downside to acting directly on an entity retrieved from > a POST is that since does not represent a resource, but rather > the results of an entity being POSTed to a resource, it can not > be referred to (bookmarks etc.). However in this case we are > only interested in this half-constructed item while we are > using this wizzard. There is no value in a half-constructed > resource outside of this context, so no value in creating a > resource at this stage. There is another downside: if you hit Back from the 303 target, the browser will ask you to resubmit the form data to `/referrals` and will refuse to render the page if you cancel. Of course this is easily avoided: you are not actually affecting the state of any resource on the server, so why use POST? Just use GET in the second step. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Steve Loughran <steve.loughran.soapbuilders@...> [2007-06-20 17:20]:
> On 6/19/07, A. Pagaltzis <pagaltzis@...> wrote:
> > Now if only those darn distributed object systems worked…
>
> to be fair, they do sometimes work in a (reliable) LAN, though
> it depends on you having the ability to keep every version of
> the software in perfect sync, which currently means dynamic
> classloading, and it does also need developers to think of (and
> test for) distribution right from the outset. What you end up
> doing is creating one single application that spans multiple
> machines, sharing code as well as data across them, and hoping
> your dist-object framework of choice can handle distributed GC
> with some bounded reliability.

If you can, somehow, guarantee that most of the fallacies of
distributed computing actually hold in a particular situation,
then you can indeed write a “distributed” system as if it were
running on a single machine. Because then it *is* running on a
single machine, albeit an unusual one. It’s not really any more
distributed than code running in an SMP system is, and the
platform requires similar engineering effort and cost as building
a hardware fault tolerant large-scale SMP machine.

> What they dont do is scale to the long haul, or across versions
> and applications

Yeah – implicit in my statement was a trailing “on the web.”

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Jan Algermissen <algermissen1971@...> [2007-06-21 07:45]:
> Next potential problem: 303 does explicitly not license the
> client to infer that the 303 Location URI is 'a substitute
> reference for the originally requested resource'[1].
>
> So, when Joe writes:
>
> "In the case of a successful, or duplicate, request the client
> will be directed to the corresponding open_order."
>
> The client actually cannot infer that the 303 Location
> identifies the corresponding resource. Is that a problem?
> Wouldn't the client at some point need a 301 to update its
> local reference to the order?

I think you are right.

Is a Location header valid in a 204 response? If so, that is what
I would suggest instead. Otherwise, the server should respond 204,
and the client would have to re-GET the resource upon 204,
yielding a 301.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Julian Reschke <julian.reschke@...> [2007-06-19 09:10]:
> Karen wrote:
> > On 6/18/07, Julian Reschke <julian.reschke@ gmx.de
> > <mailto:julian.reschke%40gmx.de>> wrote:
> >
> > > What feed information? Please clarify...
> >
> > Clarification: the question mark means I'm guessing. I'm not
> > referring to anything existing like the APP (which I haven't
> > delved deeply into just yet), just the general case for MOVE.
> >
> > Entry is a resource, part of entry's representation includes
> > the feed's identification. Server gets the entry PUT back to
> > it, sees that the difference is the feed, and so it moves it.
> > Internally it may be nothing like just changing a "feed"
> > field in an "entries" table, but as far as the client is
> > concerned, that's what you want to change about the resource,
> > so that's what it looks like.
> >
> > There's other ways to move things that still don't require a
> > MOVE action to be created. If all else fails, DELETE from the
> > old, POST to the new.
>
> Well, there are a few problems with that.
>
> 1) AFAICT, there is no feed information in the entry. Thus, APP
> would need to add that.

Too late. It’s called atom:source.

> 2) Even if it were there, APP would need to define what it
> means to edit it for clients to be able to rely on it.

True, but I don’t see its absence in the spec as discouragement.
Atompub is intentionally silent on many issues simply because we
don’t know which particular designs will work best in practice.
Experimentation and adaptation is much of the point of Atompub –
it’s a reasonable baseline so apps with congruent problem domains
don’t need to reinvent the wheel. But it’s not a turnkey solution
in any use case other than the simplest of CMSs.

> 3) And even then, it's not really clear how this would work for
> entries that are supposed to appear in multiple feeds.
>
> So you'd end up with lots of additional specification text to
> define a functionality that is already defined in a separate
> related standards track IETF spec.

Note that I don’t disagree with this. A MOVE method would be
valuable anyway. I just wanted to set a few points right about
Atompub.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Nick Gall <nick.gall@...> [2007-06-24 14:20]:
> On 6/24/07, Robert Sayre <sayrer@...> wrote:
> >The HTTP spec makes exactly this point. That's why it doesn't
> >define how a PUT request affects the state of the server.
> >
> >I find it very puzzling that Julian is the only other person
> >in this thread that seems to understand that the semantics of
> >PUT are unambiguous, while the requirements on the servers are
> >completely undefined.
> >
> >1.) the semantics of PUT are unambiguous
> >2.) requirements on servers receiving PUT requests are
> >undefined
> >
> >Both are true, and #2 does not change #1. Understanding that
> >these two facts can be simultaneously true is key to
> >understanding HTTP. It's not a problem, and it's not
> >underspecified.
>
> I agree that (1) and (2) are true and (2) does not change (1).
> But way back in the thread
> <http://tech.groups.yahoo.com/group/rest-discuss/message/9120>,
> you appeared to want to add a third constraint
>
> 3.) "[O]missions in a client PUT message [mean] unset those
> portions"; omission does "not mean only update the included
> elements."
>
> (1), (2), and (3) can NOT be all true. (3) contradicts (2)
> because it defines "requirements on servers receiving PUT
> requests".
No, it doesn’t.
(3) is what the client means by omitting certain parts. It does
not change (2), which says the server may choose to act upon the
request in any way it desires, including ignoring such an
omission.
> The ambiguity I've been referring to all along is the ambiguity
> between (2) and (3). Some people think PUT defines the
> requirement of replacement semantics (the 3 camp) and some
> people think the choice between replacement and merge are
> undefined (left open to the parties applying HTTP) (the 2
> camp). I thought Robert was in the (3) camp. You can't be in
> both.
No.
The missing understanding is that RFC 2616 talks about what the
client means, not what the server must do.
Take GET as an example. How do you make it completely free of
side effects? Servers keep logs; reading a file on the server
changes its access time; etc. In practice, every request, even
a “safe and idempotent” one, *inevitably* has side effects in
any application that does something interesting.
However!
RFC 2616 specifies that the client cannot be held responsible if
a GET initiates destructive changes. The meaning of the request
is clear and unambiguous.
And with that we return to PUT: RFC 2616 is perfectly clear that
by using PUT, the client means that omitted parts of the entity
are to be removed. This is 100% unambiguous. By PUT the client
means “replace.”
What RFC 2616 does not say is to what extent the server must
honour this request. It’s completely within the server’s rights
to retain certain parts of the resource if it so chooses.
Here, too, however, the client cannot be held responsible for
this. The client’s request has unambiguous meaning.
In practice, the extent of a server’s deviation from the client
request will depend on what is implementable on the server –
f.ex., if the client PUTs an XML document on an XML DB server, it
is sane to expect that the server will subsequently return a
representation that is equivalent to the client’s in terms of the
infoset, but probably unreasonable to expect that it will be
bit-for-bit identical. However, even though the server modified
the client’s entity in a strict sense, this wouldn’t really be an
issue.
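To make the infoset-equivalence point concrete, here is a purely
illustrative sketch (not anything from the thread), using Python’s
standard Canonical XML (C14N 2.0) support to show two byte-wise
different serializations that carry the same infoset:

```python
import xml.etree.ElementTree as ET

# Two serializations of the same document: different quote style,
# attribute order, and whitespace inside tags -- not byte-identical.
a = '<doc xmlns="urn:x" a="1" b="2"><item>hi</item></doc>'
b = "<doc b='2' a='1' xmlns='urn:x' ><item >hi</item></doc>"

assert a != b  # the bytes differ...
# ...but canonicalization normalizes both to the same form,
# i.e. they carry the same infoset.
assert ET.canonicalize(a) == ET.canonicalize(b)
```

This is exactly the kind of equivalence an XML DB server might
preserve while failing to preserve the client’s bytes.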
In another message, Nick writes:
* Nick Gall <nick.gall@...> [2007-06-26 08:20]:
> So let me guess this straight:
>
> 1. The HTTP spec clearly requires replacement semantics for
> PUT
> 2. But the HTTP spec also clearly allows servers to ignore
> such semantics if they choose
> 3. Because of (1), users of PUT will expect servers to adhere
> to replacement semantics
> 4. Because of (3), users will think that servers that don't
> adhere to replacement semantics "suck"
>
> And you think such convoluted logic is explicit intent of the
> writers of the HTTP spec? Riiiight (using my best Dr. Evil
> impersonation).
There’s nothing convoluted about it. Different applications vary
in the specific strong assurance they need from the server.
F.ex., while it is reasonable to expect that an XML DB server
will store arbitrary XML faithfully enough for infoset
equivalence, it is an unreasonable burden to expect Atompub
servers to always preserve all Extension Elements found in an
Entry Document. OTOH, a weblog Atompub implementation will
probably store application/*+xml Media Resources bit-for-bit,
which is too much to ask of the XML DB.
Therefore, leaving the server free to judge the extent to which
it can honour a client’s request in this way is vital to the
protocol’s implementability. If HTTP forced servers to make any
strong promises up front, it would be too costly to implement in
nearly all of the scenarios it is currently applied to.
Instead, the spec remains silent on that point and leaves users
of HTTP to figure out what assurances they specifically need in
their particular use case. The same quirk may lead the users of
one server implementation to consider it sucky, while the users
of another server implementation may have no problem with it or
may even *want* that particular quirk – the difference is in the
use case of each group.
Now, in another message, Jon wrote:
* Jon Hanna <jon@...> [2007-06-06 15:35]:
> When we put we transfer *a* representation of the resource from
> client to server, just like when we GET we transfer *a*
> representation of the resource from server to client.
>
> Since a resource can have more than one representation, and we
> can only ever PUT one representation, any PUT is potentially
> affecting an innumerable number of representations as these may
> all depend on the server's knowledge of the resource - which we
> have just changed.
>
> All PUTs are therefore partial in this way.
>
> Following from that there is no reason why one may not send a
> representation that omits some information (it is indeed very
> common for one representation of a resource to contain
> information another does not). There is nothing faulty with
> such a representation and therefore no reason why it may not be
> used.
>
> Therefore whether partial PUTs may or may not be used becomes
> solely a matter of whether partial knowledge of a
> representation may be expressed in a particular content type.
>
> One can also do partial PUTs using content-range but this
> requires either the entity to be of a type where
> over-writing a fixed number of octets makes sense, or else the
> use of a custom range-unit.
As you might guess at this point, I consider the “partial” issue
a red herring. It doesn’t matter if the representation
transmitted is unable to express the full state of the resource.
In general:
When a client issues a PUT, it means that the new state of
the resource is to be dependent solely on the content of the
enclosed entity.
Whether the *server* merges in parts of the previous state of the
resource is at the server’s discretion.
If you want the *client* to be able to express that the enclosed
entity should be applied to the previous state of the resource in
order to yield the new state, then you should have the client use
a verb other than PUT. Like, I dunno, PATCH.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
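To restate the distinction above in running code: this is a minimal,
purely illustrative sketch (a toy in-memory store; all names are
invented, and nothing here is a real HTTP implementation) of what
the client means by PUT versus PATCH:

```python
# Toy in-memory resource store. PUT: the new state depends solely on
# the enclosed entity. PATCH: the entity is applied to the old state.
store = {"/state/co": {"title": "Colorado",
                       "song": "Where the Columbines Grow"}}

def put(uri, entity):
    # Replacement semantics: omitted parts are gone.
    store[uri] = dict(entity)

def patch(uri, delta):
    # Merge semantics: prior state is retained and updated.
    store[uri].update(delta)

patch("/state/co", {"song2": "Rocky Mountain High"})
assert store["/state/co"]["title"] == "Colorado"  # prior state retained

put("/state/co", {"title": "Montana"})
assert "song" not in store["/state/co"]           # omitted parts removed
```

A server is still free to deviate from either meaning; the point is
only what the client’s request *says*.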
I agree that PUT has replacement semantics; I also state that this
is not to be overridden by the media type. A media type is a data
format, not a description of a networking API. RFC 2616 clearly
leaves room for the following scenario, which does not have merge
semantics, without being ambiguous that the intent is replacement
semantics. IOW, a partial PUT is not always a merge.

To explain the following example: the HTTP server applies an output
transformation to co.xml but the FTP server does not. Further, the
HTTP server only cares about the contents of <body> as
transformation input.

-----
GET http://example.org/state/co.xml

HTTP/1.1 200 OK

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
    "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns='http://www.w3.org/1999/xhtml' xml:lang='en'>
<head>
<title>Colorado</title>
</head>
<body>
<dl>
<dt>State Song</dt>
<dd>Where the Columbines Grow</dd>
</dl>
</body>
</html>
-----
PUT http://example.org/state/co.xml

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
    "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns='http://www.w3.org/1999/xhtml' xml:lang='en'>
<head>
<title>Montana</title>
</head>
<body>
<dl>
<dt>State Song</dt>
<dd>Where the Columbines Grow</dd>
</dl>
</body>
</html>

HTTP/1.1 204 No Content
-----
GET http://example.org/state/co.xml

HTTP/1.1 200 OK

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
    "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns='http://www.w3.org/1999/xhtml' xml:lang='en'>
<head>
<title>Colorado</title>
</head>
<body>
<dl>
<dt>State Song</dt>
<dd>Where the Columbines Grow</dd>
</dl>
</body>
</html>
-----
GET ftp://example.org/state/co.xml

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
    "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns='http://www.w3.org/1999/xhtml' xml:lang='en'>
<head>
<title>Montana</title>
</head>
<body>
<dl>
<dt>State Song</dt>
<dd>Where the Columbines Grow</dd>
</dl>
</body>
</html>
-----
PUT http://example.org/state/co.xml

<body xmlns='http://www.w3.org/1999/xhtml' xml:lang='en'>
<dl>
<dt>State Song</dt>
<dd>Where the Columbines Grow</dd>
<dd>Rocky Mountain High</dd>
</dl>
</body>

HTTP/1.1 204 No Content
-----
GET http://example.org/state/co.xml

HTTP/1.1 200 OK

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
    "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns='http://www.w3.org/1999/xhtml' xml:lang='en'>
<head>
<title>Colorado</title>
</head>
<body>
<dl>
<dt>State Song</dt>
<dd>Where the Columbines Grow</dd>
<dd>Rocky Mountain High</dd>
</dl>
</body>
</html>
-----
GET ftp://example.org/state/co.xml

<body xmlns='http://www.w3.org/1999/xhtml' xml:lang='en'>
<dl>
<dt>State Song</dt>
<dd>Where the Columbines Grow</dd>
</dl>
</body>
-----

In that last PUT example, I save a lot of bytes. If I were to GET
exactly that representation back from the server over HTTP it would
be a full representation of the resource in question. But this is
not required, nor are merge semantics required on PUT, nor is there
any ambiguity in RFC 2616's definition of PUT as having replacement
semantics regardless of media type.

The question is, can PATCH be made more efficient, to basically
accomplish the same thing as that last PUT:

-----
PATCH http://example.org/state/co.xml
Content-Type: ???

<dd>Rocky Mountain High</dd>

HTTP/1.1 204 No Content
-----

Or, would the delta encoding be overly complex and take more bytes
than the PUT? Particularly if we want that PATCH to be an UPDATE
instead of having two State Songs, like Colorado has had since
Friday the 13th of March, 2007.

Anyway, in my example the replacement semantics of PUT are apparent
whether the replacement is full or partial, and this is not the
same thing as a merge.

-Eric
The title of this message is "To PATCH things right" because the
final GET in my example should have been:

-----
GET ftp://example.org/state/co.xml

<body xmlns='http://www.w3.org/1999/xhtml' xml:lang='en'>
<dl>
<dt>State Song</dt>
<dd>Where the Columbines Grow</dd>
<dd>Rocky Mountain High</dd>
</dl>
</body>
-----

Sorry,
Eric
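For illustration only, here is a sketch of what a server might do
with such a PATCH, assuming a hypothetical append-style patch
format (the format and its semantics are invented here; no such
media type is defined anywhere in this thread):

```python
import xml.etree.ElementTree as ET

# Hypothetical server-side handling of the PATCH above: the patch
# body is a fragment to append under <dl>. Namespaces are omitted
# to keep the sketch small.
doc = ET.fromstring(
    "<body><dl>"
    "<dt>State Song</dt>"
    "<dd>Where the Columbines Grow</dd>"
    "</dl></body>"
)
patch_fragment = "<dd>Rocky Mountain High</dd>"

# Apply the patch: append the fragment to the existing list.
doc.find("dl").append(ET.fromstring(patch_fragment))

songs = [dd.text for dd in doc.iter("dd")]
assert songs == ["Where the Columbines Grow", "Rocky Mountain High"]
```

An UPDATE-style patch (replacing rather than appending the <dd>)
would need the delta encoding to carry addressing information, which
is where the byte-count comparison with the partial PUT gets
interesting.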
Since I've never pursued web services / SOAP, I'm never sure how
useful it is to try to explain REST in SOA terms. But I have to
disagree with the example URL given in the article:

http://humanresources.com/benefits?user=<USER_SSID>

If it had stopped there it would make sense. But then "type" is
added to the end and REST goes out the window, because if user
123-45-6789 becomes a part-time employee and later retires, then
his URL will change twice. The status of the employee is exactly
the sort of thing which belongs in the response body, not the URL.
I don't think there's a one-to-one mapping between SOAP calls and
URLs as the article suggests.

-Eric
"Eric J. Bowman" <eric@...> writes:
> Since I've never pursued web services / SOAP I'm never sure how
> useful it is to try to explain REST in SOA terms. But I have to
> disagree with the example URL given in the article.
>
> http://humanresources.com/benefits?user=<USER_SSID>
>
> If it had stopped there it would make sense. But then "type" is
> added to the end and REST goes out the window, because if user
> 123-45-6789 becomes a part-time employee and later retires then
> his URL will change twice. The status of the employee is exactly
> the sort of thing which belongs in the response body, not the
> URL.

I don't think it is that clear-cut. The doc does not say whether
parameters are used for intersection filtering or for union. That
is, benefits?user=alice&type=full_time_employee could mean either:

1. The intersection of user-specific and a group's benefits. If
Alice is actually not a full-time employee, but part of the
executive group (in many companies, executives are in a different
group than other full-time employees), and she wants to know what
benefits are common between her executive group membership and
other lesser employees, she could use that query too. This makes
sense because a system can always derive a user's employment type,
and one can argue that putting type into the query is not because
the system cannot do the derivation itself, but because you want to
prevent the derivation (supply the value yourself).

2. The union of user-specific and a group's benefits. The other way
to interpret the query is: the user parameter shows special
benefits for the user, and the type parameter shows the group
benefits. If Alice is a full-time employee with a special benefit
of a $1K stock option, then to see her total benefits, she would
have to specify both the user and type params. This is a case where
the system does not attempt to do derivation; it does not try to be
smart, which is a restraint I wish more software showed.

So, if anything, it's the article that is not clear on the purpose
of the parameters.

YS.
John Panzer wrote:
> Julian Reschke wrote:
> > John Panzer wrote:
> > > Alan Dean wrote:
> > > > ...
> > > >
> > > > PUT /robots.txt
> > > >
> > > > ... replaces the whole file
> > >
> > > How about
> > >
> > > PUT /robots.txt
> > > Content-Range: bytes=50-80/500
> > > ...
> > >
> > > ?
> >
> > ...
> >
> > This has been discussed often enough on the WebDAV mailing
> > list.
> >
> > The main problem is that it's hard to deploy, because many
> > deployed servers ignore "Content-Range" upon PUT, so the
> > request would damage the content.
>
> I had thought that Accept-Ranges: bytes would address this, but
> upon closer reading realized that it's advertising support for
> GET ranges only.
>
> So assume for the moment that a server needs to advertise its
> support for this extension some way (as it would for PATCH + some
> specific delta format); say, Accept-Put-Ranges: . Are there other
> problems?

Haven't seen anything further on this, and it's a serious question.
Anyone?
Julian Reschke wrote:
> John Panzer wrote:
> > Alan Dean wrote:
> > > ...
> > >
> > > PUT /robots.txt
> > >
> > > ... replaces the whole file
> >
> > How about
> >
> > PUT /robots.txt
> > Content-Range: bytes=50-80/500
> > ...
> >
> > ?
>
> ...
>
> This has been discussed often enough on the WebDAV mailing list.
>
> The main problem is that it's hard to deploy, because many
> deployed servers ignore "Content-Range" upon PUT, so the request
> would damage the content.
>
> The solution to this, again, is PATCH with a patch format that
> allows these kinds of modifications.

(FWIW, I'm following James Snell's revival of the PATCH draft RFC
with great interest.)

However, I'm a bit unclear on the problem. PATCH, or any other
solution, also needs to be deployed, and servers need to be able to
advertise the capability -- e.g., via Allow: -- for them to be
really useful. All of this is an optimization anyway; you don't
want clients attempting PATCH on every server and giving up if it
bounces off. So I'm not sure why it's any harder to deploy than any
other solution. Any WebDAV mailing list archives available for
perusal?

-John
John Panzer wrote:
> ...
> However, I'm a bit unclear on the problem. PATCH, or any other
> solution, also needs to be deployed, and servers need to be able
> to advertise the capability -- e.g., via Allow: -- for them to be
> really useful. All of this is an optimization anyway; you don't
> want clients attempting PATCH on every server and giving up if it
> bounces off. So I'm not sure why it's any harder to deploy than
> any other solution. Any

John,

the deployment issue is that there are servers out there that
*ignore* Range headers in requests. When they do, a client that
tried just to send a range will have replaced the whole resource
-> data loss.

> WebDAV mailing list archives available for perusal?
> ...

<http://lists.w3.org/Archives/Public/w3c-dist-auth/2002JanMar/0144.html>
<http://www.google.de/search?q=site%3Alists.w3.org%2FArchives%2FPublic%2Fw3c-dist-auth+put+range>

Best regards, Julian
Julian Reschke wrote:
> John Panzer wrote:
> > ...
> > However, I'm a bit unclear on the problem. PATCH, or any other
> > solution, also needs to be deployed, and servers need to be
> > able to advertise the capability -- e.g., via Allow: -- for
> > them to be really useful. All of this is an optimization
> > anyway; you don't want clients attempting PATCH on every
> > server and giving up if it bounces off. So I'm not sure why
> > it's any harder to deploy than any other solution. Any
>
> John,
>
> the deployment issue is that there are servers out there that
> *ignore* Range headers in requests. When they do, a client that
> tried just to send a range will have replaced the whole resource
> -> data loss.

Thanks for the links. I understand the problem you're referring to,
but it doesn't apply to my question. The links you reference were
in the context of an HTTP working group dealing with generic HTTP
servers, a lack of implementation experience, lack of people
reviewing the draft... I think the situation here is quite
different.

My questions are rather what extensions we should recommend for
servers who wish to support them, and how PATCH compares to using
Content-Range:. I don't see a huge difference between PATCH and
Content-Range: in this respect, because servers are going to need
to advertise either one before a client will attempt them.

There was also an assumption in the W3 WG that they were talking
about byte range updates, and I think that what we really are
looking at is infoset-level deltas. Which I think changes the
discussion.

> > WebDAV mailing list archives available for perusal?
> > ...
>
> <http://lists.w3.org/Archives/Public/w3c-dist-auth/2002JanMar/0144.html>
> <http://www.google.de/search?q=site%3Alists.w3.org%2FArchives%2FPublic%2Fw3c-dist-auth+put+range>
>
> Best regards, Julian
John Panzer wrote:
> > John,
> >
> > the deployment issue is that there are servers out there that
> > *ignore* Range headers in requests. When they do, a client
> > that tried just to send a range will have replaced the whole
> > resource -> data loss.
>
> Thanks for the links. I understand the problem you're referring
> to but it doesn't apply to my question. The links you reference
> were in the context of an HTTP working group dealing with
> generic HTTP servers, a lack of implementation experience, lack
> of people reviewing the draft... I think the situation here is
> quite different.
>
> My questions are rather what extensions we should recommend for
> servers who wish to support them. And how PATCH compares to
> using Content-Range:. I don't see a huge difference between
> PATCH and Content-Range: in this respect, because servers are
> going to need to advertise either one before a client will
> attempt them.

That's right. But unless I'm missing something we'll need a new
discovery mechanism in both cases, right?

> There was also an assumption in the W3 WG that they were talking
> about byte range updates, and I think that what we really are
> looking at is infoset level deltas. Which I think changes the
> discussion.

Yes. PUT/Content-Range *could* be used for byte range updates with
a reliable discovery mechanism. But that's not sufficient.

> ...

Best regards, Julian
Julian Reschke wrote:
> John Panzer wrote:
> > My questions are rather what extensions we should recommend
> > for servers who wish to support them. And how PATCH compares
> > to using Content-Range:. I don't see a huge difference between
> > PATCH and Content-Range: in this respect, because servers are
> > going to need to advertise either one before a client will
> > attempt them.
>
> That's right. But unless I'm missing something we'll need a new
> discovery mechanism in both cases, right?

Right. And discovery needs to be efficient, since the main point of
patching is efficiency.
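To make concrete what an opt-in server would actually have to do
with a range on PUT, here is an illustrative sketch (not a real
implementation; the grammar follows the RFC 2616 Content-Range
response-header form `bytes first-last/length`, and the function
name is invented):

```python
import re

def apply_put_range(stored: bytes, content_range: str, body: bytes) -> bytes:
    """Splice `body` into `stored` per a Content-Range header on PUT.

    Opt-in servers only: a server that silently ignores the header
    would instead replace the whole resource with the fragment --
    exactly the data-loss hazard discussed in this thread.
    """
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(?:\d+|\*)", content_range)
    if not m:
        raise ValueError("unsupported Content-Range")
    first, last = int(m.group(1)), int(m.group(2))
    if last - first + 1 != len(body):
        raise ValueError("range length does not match body length")
    return stored[:first] + body + stored[last + 1:]

# Replace bytes 2-4 of a 10-byte resource with a 3-byte fragment.
assert apply_put_range(b"0123456789", "bytes 2-4/10", b"abc") == b"01abc56789"
```

Note how little this buys: one contiguous octet range per request,
which is why the thread keeps steering toward infoset-level deltas
instead.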
* John Panzer <jpanzer@...> [2007-06-28 18:25]:
> There was also an assumption in the W3 WG that they were
> talking about byte range updates, and I think that what we
> really are looking at is infoset level deltas. Which I think
> changes the discussion.

Exactly. The applicability of PUT with Content-Range is quite
limited: you can only modify a single contiguous region of the
resource per request, and it requires that the server support
octet-level addressing on the underlying resource in the first
place. Supporting Content-Range on GET is easy, even if the data
is plucked from a database and then rendered through a template;
supporting Content-Range on PUT in this scenario is… very tricky,
to say the least.

And of course, there’s the gnarly problem of Content-Range getting
dropped in some scenarios.

PATCH, OTOH, has none of these issues. You can modify as many
parts of the resource at once as you want, expressing
modifications at an arbitrary level of abstraction, and failure in
transit is more reliably detected.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote:
> * John Panzer <jpanzer@...> [2007-06-28 18:25]:
> > There was also an assumption in the W3 WG that they were
> > talking about byte range updates, and I think that what we
> > really are looking at is infoset level deltas. Which I think
> > changes the discussion.
>
> Exactly. The applicability of PUT with Content-Range is quite
> limited: you can only modify a single contiguous region of the
> resource per request, and it requires that the server support
> octet-level addressing on the underlying resource in the first
> place. Supporting Content-Range on GET is easy, even if the data
> is plucked from a database and then rendered through a template;
> supporting Content-Range on PUT in this scenario is… very
> tricky, to say the least.
>
> And of course, there’s the gnarly problem of Content-Range
> getting dropped in some scenarios.
>
> PATCH, OTOH, has none of these issues. You can modify as many
> parts of the resource at once as you want, expressing
> modifications at an arbitrary level of abstraction, and failure
> in transit is more reliably detected.

Both Content-Range and PATCH need to define infoset-level
extensions. Content-Range might possibly be dropped silently by
very broken intermediaries, but PATCH is known to be blocked by
some proxies in certain configurations (e.g., Squid). I don't see
a clear winner.
On 6/28/07, John Panzer <jpanzer@...> wrote:
>
> Right. And discovery needs to be efficient since the main point
> of patching is efficiency.

In the Atom case, there is already a good mechanism to bootstrap
this. You have an introspection doc that lists where to find links
to start, so you might as well add something that says PATCH is
supported too. You could scope it hierarchically, like HTTP
authentication, if you want to limit the URL space where PATCH can
be assumed to work.

--
Robert Sayre

"I would have written a shorter letter, but I did not have the
time."
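As a sketch of what such an introspection-doc hint might look like
(the extension element and its namespace are entirely invented for
illustration; Atompub defines no such markup, and the collection
URL is a placeholder):

```xml
<!-- Hypothetical Atompub service document advertising PATCH
     support per collection. The x:patch-supported element is
     invented here, not part of any spec. -->
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom"
         xmlns:x="http://example.org/ns/patch-hint">
  <workspace>
    <atom:title>Blog</atom:title>
    <collection href="http://example.org/blog/entries">
      <atom:title>Entries</atom:title>
      <x:patch-supported/>
    </collection>
  </workspace>
</service>
```

Scoping the hint per collection mirrors the hierarchical scoping
Robert suggests.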
On 6/28/07, John Panzer <jpanzer@...> wrote:
>
> Both Content-Range and PATCH need to define infoset-level
> extensions. Content-Range might possibly be dropped silently by
> very broken intermediaries, but PATCH is known to be blocked by
> some proxies in certain configurations (e.g., Squid). I don't
> see a clear winner.

That's inaccurate. First, Content-Range is known to be ignored by
*origin servers*. Second, I wonder how many proxies block PATCH,
but not PUT. I know there are several that explicitly block
PROPFIND, etc. because of Microsoft security bugs. There are also
several that block everything except for GET, POST, and HEAD. If
you really want to tunnel around obnoxious proxies, just add an
X-No-Really: header to POST requests. This would work on T-Mobile
USA, for instance, but PUT with Content-Range would not. Or use
HTTPS, which is impervious to proxies.

There are several choices that don't require broken standards.
Transparency is a good thing. There is a trade-off, in that cheap
and obvious request routing is more prone to proxy meddling on
insecure channels. Who desperately needs to deploy insecure PUT
requests on a massive scale?

--
Robert Sayre

"I would have written a shorter letter, but I did not have the
time."
Robert Sayre wrote:
> On 6/28/07, John Panzer <jpanzer@...> wrote:
> > Both Content-Range and PATCH need to define infoset-level
> > extensions. Content-Range might possibly be dropped silently
> > by very broken intermediaries, but PATCH is known to be
> > blocked by some proxies in certain configurations (e.g.,
> > Squid). I don't see a clear winner.
>
> That's inaccurate. First, Content-Range is known to be ignored
> by *origin servers*.

Again, not relevant to a discussion about new extensions to be
implemented by servers on an opt-in basis with a discovery
mechanism.

> Second, I wonder how many proxies block PATCH, but not PUT. I
> know there are several that explicitly block PROPFIND, etc.

Any proxy that blocks PUT also blocks AtomPub of course (my main
use case).

> because of Microsoft security bugs. There are also several that
> block everything except for GET, POST, and HEAD. If you really
> want to tunnel around obnoxious proxies, just add an
> X-No-Really: header to POST requests. This would work on
> T-Mobile USA, for instance, but PUT with Content-Range would
> not. Or use HTTPS, which is impervious to proxies.

I suspect all of these solutions will likely be using HTTPS anyway.

> There are several choices that don't require broken standards.
> Transparency is a good thing. There is a trade-off, in that
> cheap and obvious request routing is more prone to proxy
> meddling on insecure channels. Who desperately needs to deploy
> insecure PUT requests on a massive scale?
On 6/29/07, John Panzer <jpanzeracm@...> wrote: > > Again, not relevant to a discussion about new extensions to be > implemented by servers on an opt-in basis with a discovery mechanism. Totally relevant. Existing servers, which inaccurately think they implement PUT correctly, will not pay attention to the proposed "discovery mechanism", or the new Content-Range specifier, and overwrite entire files with partial data. Sucks for sure, I agree. > > I suspect all of these solutions will likely be using HTTPS anyway. You wrote "but PATCH is known to be blocked by some proxies in certain configurations (e.g., Squid). " Squid can't do a thing about https traffic. A double-edged sword, for sure. But if everyone is using HTTPS, it's irrelevant. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
Robert Sayre wrote: > On 6/29/07, John Panzer <jpanzeracm@...> wrote: >> >> Again, not relevant to a discussion about new extensions to be >> implemented by servers on an opt-in basis with a discovery mechanism. > > Totally relevant. Existing servers, which inaccurately think they > implement PUT correctly, will not pay attention to the proposed > "discovery mechanism", or the new Content-Range specifier, and > overwrite entire files with partial data. Sucks for sure, I agree. By discovery mechanism, I'm imagining that a server would respond to an original GET (which you have to do anyway to get the original bits) with some additional headers such as Put-Ranges-Mime-Types-Accepted: text/xml. A server which is returning this sort of metadata yet is ignoring Content-Range: is simply being perverse. > >> >> I suspect all of these solutions will likely be using HTTPS anyway. > > You wrote "but PATCH is known to be blocked by some proxies in certain > configurations (e.g., Squid). " > > Squid can't do a thing about https traffic. A double-edged sword, for > sure. But if everyone is using HTTPS, it's irrelevant. > Yup. It's more of a theoretical concern (well, Netscalers can terminate https connections and proxy but that's about the only one I know of).
John Panzer wrote: > Both Content-Range and PATCH need to define infoset-level extensions. > Content-Range might possibly be dropped silently by very broken > intermediaries, but PATCH is known to be blocked by some proxies in > certain configurations (e.g., Squid). I don't see a clear winner. When a proxy blocks PATCH, it's either broken, or it's doing that on purpose because it was told to. In the first case, it needs to be patched (pun), in the second case it's a feature. Best regards, Julian
* John Panzer <jpanzer@...> [2007-06-29 05:20]: > Both Content-Range and PATCH need to define infoset-level > extensions. How is that supposed to work for PUT+Content-Range? That’s a horrible strain of semantics if you ask me. > Content-Range might possibly be dropped silently by very broken > intermediaries, but PATCH is known to be blocked by some > proxies in certain configurations (e.g., Squid). That’s exactly the point: when it fails, it fails loudly and without touching anything. When PUT+Content-Range fails, it appears to succeed and destroys data. That alone is a knockout issue, if you ask me. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
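Aristotle's failure-mode argument can be sketched with a toy in-memory handler (hypothetical code, not any real server): a naive origin that ignores Content-Range on PUT appears to succeed while destroying data, whereas an unsupported PATCH is refused outright and touches nothing.

```python
def naive_handle(store, method, path, headers, body):
    """Toy request handler over a dict; for illustration only."""
    if method == "PUT":
        # The failure mode under discussion: the Content-Range entry in
        # `headers` is silently ignored, so a partial body replaces the
        # whole stored representation.
        store[path] = body
        return 200
    if method == "GET":
        return 200 if path in store else 404
    # PATCH (or any unknown method) fails loudly, without touching state.
    return 405

store = {"/doc": b"0123456789"}
# The client means "replace bytes 0-3 only", but the server ignores the header:
status = naive_handle(store, "PUT", "/doc",
                      {"Content-Range": "bytes 0-3/10"}, b"abcd")
# status is 200, yet six bytes of the resource are gone.
patch_status = naive_handle(store, "PATCH", "/doc", {}, b"some-diff")
# patch_status is 405 and the resource is left intact.
```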
A. Pagaltzis wrote:
> * John Panzer <jpanzer@...> [2007-06-29 05:20]:
>
>>Both Content-Range and PATCH need to define infoset-level
>>extensions.
>
>
> How is that supposed to work for PUT+Content-Range? That’s a
> horrible strain of semantics if you ask me.
That's the kind of reaction I was looking for :). It could work like:
Content-Range: xpath:(xpath expression selecting a subtree or subset of
document)
In other words, instead of selecting a range of bytes 50-899, you can
select entry/author/* only.
>
>
>>Content-Range might possibly be dropped silently by very broken
>>intermediaries, but PATCH is known to be blocked by some
>>proxies in certain configurations (e.g., Squid).
>
>
> That’s exactly the point: when it fails, it fails loudly and
> without touching anything. When PUT+Content-Range fails, it
> appears to succeed and destroys data.
I guess this seems a bit odd to me. If a server advertises, for
example, Accept-Put-Ranges: xpath in its headers and then proceeds to
ignore Content-Range:, what's to stop it from treating a PATCH like a
PUT? Both seem like equally broken behavior to me.
switch (method) {
    case PATCH: /* TODO: Add real PATCH support */
    case PUT:
        ...
}
I am worried about intermediaries though, including things like
environments (Flash?) and client libraries. I'm worried on behalf of
both methods.
-John
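The infoset-level range idea above might be sketched like this, with ElementTree's limited XPath support standing in for a full XPath 1.0 engine; the helper name and the "xpath:" range unit are hypothetical, not part of any spec.

```python
# Sketch of a partial update addressed by an XPath-ish expression instead
# of a byte range (hypothetical extension under discussion in the thread).
import xml.etree.ElementTree as ET

def apply_xpath_put(document, parent_path, child_tag, new_xml):
    """Replace the first <child_tag> under parent_path with new_xml.
    Illustrative only; a real implementation would need full XPath."""
    root = ET.fromstring(document)
    parent = root.find(parent_path) if parent_path else root
    old = parent.find(child_tag)
    parent.remove(old)                    # drop the selected subtree
    parent.append(ET.fromstring(new_xml)) # splice in the replacement
    return ET.tostring(root, encoding="unicode")

doc = "<entry><author><name>Old</name></author><title>T</title></entry>"
updated = apply_xpath_put(doc, "author", "name", "<name>New</name>")
# Only author/name changes; the rest of the document is untouched.
```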
Julian Reschke wrote: > John Panzer wrote: > >>Both Content-Range and PATCH need to define infoset-level extensions. >>Content-Range might possibly be dropped silently by very broken >>intermediaries, but PATCH is known to be blocked by some proxies in >>certain configurations (e.g., Squid). I don't see a clear winner. > > > When a proxy blocks PATCH, it's either broken, or it's doing that on > purpose because it was told to. Actually in the case of Squid, it's because the sysadmin never heard of PATCH and she's forced to enter the methods to allow explicitly: GET, PUT, DELETE, HEAD, POST. (No wildcards.) At least that's my understanding. It is amusing, though, that https tunnels or bypasses all of this security theater anyway.
On 6/29/07, John Panzer <jpanzer@...> wrote: > > A. Pagaltzis wrote: > > * John Panzer <jpanzer@...> [2007-06-29 05:20]: > > > >>Both Content-Range and PATCH need to define infoset-level > >>extensions. > > > > > > How is that supposed to work for PUT+Content-Range? That's a > > horrible strain of semantics if you ask me. > > That's the kind of reaction I was looking for :). It could work like: > > Content-Range: xpath:(xpath expression selecting a subtree or subset of > document) > > In other words, instead of selecting a range of bytes 50-899, you can > select entry/author/* only. > The problem with this idea is that Content-Range specifies that the value is a byte range, not an xpath expression. http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.16 Take a step back and you don't need to handle this via headers, just support the xpath expression within the URI, probably in the fragment, e.g. PUT /foo.xml#//root/child[@attribute='value'] no black magic or obscure header usage required. Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
* John Panzer <jpanzeracm@...> [2007-06-29 16:15]: > A. Pagaltzis wrote: > >* John Panzer <jpanzer@...> [2007-06-29 05:20]: > >>Both Content-Range and PATCH need to define infoset-level > >>extensions. > > > >How is that supposed to work for PUT+Content-Range? That’s a > >horrible strain of semantics if you ask me. > > That's the kind of reaction I was looking for :). It could > work like: > > Content-Range: xpath:(xpath expression selecting a subtree or > subset of document) > > In other words, instead of selecting a range of bytes 50-899, > you can select entry/author/* only. Aha. Well, you can still change only a single region of the document per request that way. Or if you use an XPath that selects multiple nodes, you can only replace all of the nodes with the same value. > >>Content-Range might possibly be dropped silently by very > >>broken intermediaries, but PATCH is known to be blocked by > >>some proxies in certain configurations (e.g., Squid). > > > >That’s exactly the point: when it fails, it fails loudly and > >without touching anything. When PUT+Content-Range fails, it > >appears to succeed and destroys data. > > I guess this seems a bit odd to me. If a server that > advertises, for example, Accept-Put-Ranges: xpath in its > headers then proceeds to ignore Content-Range:, what's to stop > it from treating a PATCH like a PUT? Both seem equally broken > behavior to me. That wasn’t what I meant. If the origin advertises such support but fails to implement it correctly, obviously no interesting argument about the protocol can be drawn from that. > I am worried about intermediaries though, including things like > environments (Flash?) and client libraries. I'm worried on > behalf of both methods. *That* is what I meant. PATCH will either be supported correctly or be rejected by intermediaries. PUT+Content-Range might instead be supported insufficiently well to work but well enough to lead to data loss. 
It could be slightly harder to deploy PATCH because a minority of intermediaries might reject it while accepting PUT and (probably unintentionally, by not meddling with the request) correctly supporting Content-Range on the latter. But I favour choices that reduce the chance of a system left in an inconsistent state, and between PATCH and PUT+Content-Range, both going across intermediaries, that choice is clearly PATCH. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
> >Take a step back and you don't need to handle this via headers, just >support the xpath expression within the URI, probably in the fragment, >e.g. > >PUT /foo.xml#//root/child[@attribute='value'] > >no black magic or obscure header usage required. > That only works with a client that sends fragments as part of the request... ;-) We generate XPointer URLs on the server, for client consumption, as XPointer is implemented using XPath in the fragment. But on the server side you need to make an "XPointer query" URL: PUT /foo.xml?xpointer=//root/child/text() -Eric
Eric J. Bowman wrote: >> Take a step back and you don't need to handle this via headers, just >> support the xpath expression within the URI, probably in the fragment, >> e.g. >> >> PUT /foo.xml#//root/child[@attribute='value'] >> >> no black magic or obscure header usage required. >> >> > > Only a client that sends fragments as part of the request... ;-) > > We generate XPointer URLs on the server, for client consumption as > XPointer is implemented using XPath in the fragment. But on the > server side you need to make an "XPointer query" URL: > > PUT /foo.xml?xpointer=//root/child/text() > How would you handle ETags and If-Match:? (Would the ETag be for that specific subset, or for the base resource? Etc.)
Alan Dean wrote:
> On 6/29/07, John Panzer <jpanzer@...> wrote:
>
>> A. Pagaltzis wrote:
>>
>>> * John Panzer <jpanzer@...> [2007-06-29 05:20]:
>>>
>>>
>>>> Both Content-Range and PATCH need to define infoset-level
>>>> extensions.
>>>>
>>> How is that supposed to work for PUT+Content-Range? That's a
>>> horrible strain of semantics if you ask me.
>>>
>> That's the kind of reaction I was looking for :). It could work like:
>>
>> Content-Range: xpath:(xpath expression selecting a subtree or subset of
>> document)
>>
>> In other words, instead of selecting a range of bytes 50-899, you can
>> select entry/author/* only.
>>
>>
>
> The problem with this idea is that Content-Range specifies that the
> value is a byte range, not an xpath expression.
>
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.16
>
But that references this, which explicitly allows for extensions. (I'm
thinking this is mostly academic, as the other objections are making
PATCH win by a neck anyway...)
3.12 Range Units
HTTP/1.1 allows a client to request that only part (a range of) the
response entity be included within the response. HTTP/1.1 uses range
units in the Range (section 14.35
<http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35>) and
Content-Range (section 14.16
<http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.16>)
header fields. An entity can be broken down into subranges according to
various structural units.
range-unit = bytes-unit | other-range-unit
bytes-unit = "bytes"
other-range-unit = token
The only range unit defined by HTTP/1.1 is "bytes". HTTP/1.1
implementations MAY ignore ranges specified using other units.
HTTP/1.1 has been designed to allow implementations of applications that
do not depend on knowledge of ranges.
> >>But on the server side you need to make an "XPointer query" URL: >>PUT /foo.xml?xpointer=//root/child/text() > >How would you handle ETags and If-Match:? (Would the ETag be for that >specific subset, or for the base resource? Etc.) > The idea is to separate out the contents to be updated, into its own resource with its own URL. So the ETag would be for the new resource, not the base resource. A conditional PUT would do nicely. From the client perspective, the semantics are "full replacement" and the client doesn't need to care that the new nodeset gets merged into another resource. Repeating the server-side XPointer GET reflects the changes and assigns a new ETag. But this is what I really want to do... [1] http://en.wikibooks.org/wiki/Understanding_darcs:Patch_theory DARCS' strength over other RCS solutions are its powerful merge and undo capabilities. I keep thinking along these lines, but applying DARCS patch theory to XML trees seems a bit daunting. I'd like to PATCH using Content- Type: application/darcs+xml if only it existed... -Eric
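Eric's conditional-PUT flow for an XPointer-addressed sub-resource could look roughly like this sketch, with an MD5 of the representation standing in for the ETag and an in-memory dict standing in for the server; the URL and helper names are illustrative.

```python
# Sketch: the sub-resource gets its own ETag, and If-Match guards the PUT.
import hashlib

def etag_of(representation: bytes) -> str:
    # Weak stand-in: a quoted MD5 of the representation bytes.
    return '"%s"' % hashlib.md5(representation).hexdigest()

def conditional_put(store, path, if_match, new_body):
    """Apply the PUT only if the client's ETag still matches."""
    current = store.get(path, b"")
    if if_match != etag_of(current):
        return 412  # Precondition Failed: someone else changed it
    store[path] = new_body
    return 200

store = {"/foo.xml?xpointer=//dt": b"<dt>State Song</dt>"}
tag = etag_of(store["/foo.xml?xpointer=//dt"])
ok = conditional_put(store, "/foo.xml?xpointer=//dt", tag, b"<dt>Song</dt>")
# Reusing the now-stale ETag fails, preventing a lost update:
stale = conditional_put(store, "/foo.xml?xpointer=//dt", tag, b"<dt>Other</dt>")
```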
Eric J. Bowman wrote: >>>But on the server side you need to make an "XPointer query" URL: >>>PUT /foo.xml?xpointer=//root/child/text() >> >>How would you handle ETags and If-Match:? (Would the ETag be for that >>specific subset, or for the base resource? Etc.) >> > > > The idea is to separate out the contents to be updated, into its own > resource with its own URL. So the ETag would be for the new resource, not > the base resource. A conditional PUT would do nicely. From the client > perspective, the semantics are "full replacement" and the client doesn't > need to care that the new nodeset gets merged into another resource. > Repeating the server-side XPointer GET reflects the changes and assigns a > new ETag. But in practice I think many clients are likely to have a full representation from the base resource (locally cached with its etag) and want to update just one field. If they simply want to blast the value in and overwrite any other updates for that field, they're fine. If they want to say "update this one field, but only if the overall resource hasn't changed since I last retrieved it, because I want to maintain local consistency within the resource"... there's nothing they can do, really, except PUT the entire resource with its etag. This is one of the attractive propositions of PATCH: Presumably the etag you use is the one for the entire resource, even though you're only updating a subset. This is useful. It's also one of the requirements for Web3S apparently (they allow ETags to apply to URL hierarchies to solve the issue). > But this is what I really want to do... > > [1] http://en.wikibooks.org/wiki/Understanding_darcs:Patch_theory > > DARCS' strength over other RCS solutions are its powerful merge and undo > capabilities. I keep thinking along these lines, but applying DARCS patch > theory to XML trees seems a bit daunting. I'd like to PATCH using Content- > Type: application/darcs+xml if only it existed... > > -Eric
> >If they want to say "update this one field, but only if the overall >resource hasn't changed since I last retrieved it, because I want to >maintain local consistency within the resource"... there's nothing they >can do, really, except PUT the entire resource with its etag. > What does it matter if the source (there's no resource on the server, "resource" is an abstraction) has changed, if those changes don't affect the result of the XPointer query? ETag = MD5. If I want to change <dt>State Song</dt> to <dt>Song</dt> why do I care if someone has, in the meantime, appended <dd>Rocky Mountain High</dd> under <dt>State Song</dt> thereby changing the source? It doesn't change the ETag for the partial resource I'm interested in updating so it doesn't affect a conditional PUT request. > >This is one of the attractive propositions of PATCH: Presumably the >etag you use is the one for the entire resource, even though you're only >updating a subset. This is useful. It's also one of the requirements >for Web3S apparently (they allow ETags to apply to URL hierarchies to >solve the issue). > But the overriding purpose of PATCH is to save bytes over using PUT. If we can conceive a way to update XML trees without the PATCH message body taking more bytes than using an XPointer query as I've described then there is no reason to implement PATCH for XML updates. -Eric
I'm going to ding myself on terminology usage. > >So the ETag would be for the new resource, not the base resource. > Should have been, "So the ETag would be for a representation of the new resource, not derived from the source file." Because, of course, the new resource is also a representation of the base resource. -Eric
Eric J. Bowman wrote: >>If they want to say "update this one field, but only if the overall >>resource hasn't changed since I last retrieved it, because I want to >>maintain local consistency within the resource"... there's nothing they >>can do, really, except PUT the entire resource with its etag. >> > > > What does it matter if the source (there's no resource on the server, > "resource" is an abstraction) has changed, if those changes don't affect > the result of the XPointer query? Example: In an Atom entry, change status from "draft" to "published" iff the content has not changed from the version I just reviewed in my client. ETag = MD5. If I want to change > <dt>State Song</dt> to <dt>Song</dt> why do I care if someone has, in > the meantime, appended <dd>Rocky Mountain High</dd> under <dt>State > Song</dt> thereby changing the source? It doesn't change the ETag for > the partial resource I'm interested in updating so it doesn't affect > a conditional PUT request. > > >>This is one of the attractive propositions of PATCH: Presumably the >>etag you use is the one for the entire resource, even though you're only >>updating a subset. This is useful. It's also one of the requirements >>for Web3S apparently (they allow ETags to apply to URL hierarchies to >>solve the issue). >> > > > But the overriding purpose of PATCH is to save bytes over using PUT. If > we can conceive a way to update XML trees without the PATCH message body > taking more bytes than using an XPointer query as I've described then > there is no reason to implement PATCH for XML updates. For what is a man profited, if he shall gain all the bytes in the world, and lose his consistency?
On 6/29/07, John Panzer <jpanzer@...> wrote: > > But that references this, which explicitly allows for extensions. (I'm thinking this > is mostly academic, as the other objections are making PATCH win by a neck > anyway...) > Actually, I think this is wrong. The content-range-spec production does not reference the more general range-unit, but instead uses bytes-unit directly. > range-unit = bytes-unit | other-range-unit > bytes-unit = "bytes" > other-range-unit = token > > The only range unit defined by HTTP/1.1 is "bytes". > HTTP/1.1 implementations MAY ignore ranges specified using other units. > > HTTP/1.1 has been designed to allow implementations of applications that do > not depend on knowledge of ranges. These words from the spec seem to be written with GET in mind, I'm not sure they make sense for PUT. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
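Robert's point about the grammar can be checked mechanically: the content-range-spec production in RFC 2616 only admits the "bytes" unit, so an "xpath:" unit simply fails to parse. A small sketch (not a full validator; among other things it omits the `bytes */length` form used in 416 responses):

```python
# Parser for the byte-content-range-spec production Robert cites.
import re

_CONTENT_RANGE = re.compile(r'^bytes (\d+)-(\d+)/(\d+|\*)$')

def parse_content_range(value):
    """Return (first, last, length-or-None), or None when the value is
    not a valid byte-content-range-spec."""
    m = _CONTENT_RANGE.match(value)
    if not m:
        return None  # e.g. an "xpath:..." unit is not in the grammar
    first, last = int(m.group(1)), int(m.group(2))
    length = None if m.group(3) == "*" else int(m.group(3))
    return (first, last, length)
```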
>> >> What does it matter if the source (there's no resource on the server, >> "resource" is an abstraction) has changed, if those changes don't affect >> the result of the XPointer query? > >Example: In an Atom entry, change status from "draft" to "published" >iff the content has not changed from the version I just reviewed in my >client. > If that's a constraint of your implementation, then I would recommend against exposing just that status element as its own sub-resource. However, my implementation may not impose that constraint, in which case assigning an URL to the sub-resource and manipulating that works just fine. >> >> But the overriding purpose of PATCH is to save bytes over using PUT. If >> we can conceive a way to update XML trees without the PATCH message body >> taking more bytes than using an XPointer query as I've described then >> there is no reason to implement PATCH for XML updates. > >For what is a man profited, if he shall gain all the bytes in the world, >and lose his consistency? > You got me there. Why not ask Roy? > >PATCH has very specific semantics and a very specific >goal of reducing bits on updates. It is a separate method because it >needs access to the same (generic) conditional mechanisms as PUT and >because POST (when applied to an authorable resource) means append. > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/7345 Meaning, I think, that if I POST <dd>Rocky Mountain High</dd> multiple times it must be appended multiple times. Currently, to avoid the duplicate entry, I return 400 Bad Request if the entry already exists. So I've pretty well borked POST into not meaning append, except the first time, because I'm forcing merge semantics onto POST. More importantly: > >The generality refers to all resources having the same >interface, not all resources having an artificially limited interface. 
>It isn't even necessary for all resources to support the same set of >methods -- only that, when supported, they mean the same thing to all >resources. > Since I have some resources which treat POST as append, and others which treat POST as merge, I've violated the principle of generality. As a result, my messages are not self-descriptive, and breaking the Uniform Interface constraint results in something that is not REST. Which seems to suggest that PATCH is the proper method, unless of course the implementation ends up heavier than just doing a PUT. Consistency is up to the application developer. I profit by saving bytes, and maintaining consistent method semantics across all my resources. -Eric
Eric J. Bowman wrote: > ... > But the overriding purpose of PATCH is to save bytes over using PUT. If It may be the overriding one for you, but that's not necessarily the case for everybody. > we can conceive a way to update XML trees without the PATCH message body > taking more bytes than using an XPointer query as I've described then > there is no reason to implement PATCH for XML updates. > ... You may want to have a look at XCAP (<http://tools.ietf.org/html/rfc4825>). This has all kinds of issues with namespace prefix mapping, scope of ETags, and so on... Best regards, Julian
Paul Winkler wrote: > > > Reading the docs at http://activemq.apache.org/ > <http://activemq.apache.org/> I noticed they have a > REST API. So I clicked on http://activemq.apache.org/rest.html > <http://activemq.apache.org/rest.html> to > read more and, perhaps not surprisingly, found that it's pretty broken > REST. IBM just published their similar HTTP interface to WebSphere MQ. http://www-1.ibm.com/support/docview.wss?uid=swg24016142 See the PDF, especially chapters 5 and 8, for the details regarding their use of HTTP. Now, they don't claim to be RESTful, but they do seem to care about being good HTTP citizens, with examples such as: "GET - Browses the first message on a queue. In line with the HTTP protocol this does not delete the message from the queue". However, there are some obvious problems with the design (possibly due to simplifying clients), most of which have already been discussed in this thread:
* DELETE to retrieve a message is done on the queue URL
* DELETE to browse a message is done on the queue URL
* No support for retries if a POST or DELETE fails
* A custom header (x-msg-format) that overrides Content-Type, not sure if this is really an issue
* No description of how caching is handled. The examples do not contain any cache headers at all, so I'm guessing caching is not supported.
What do you think of their design? Judging from this thread, this seems to be a tricky area. /niklas
* Julian Reschke <julian.reschke@...> [2007-06-30 09:30]: > Eric J. Bowman wrote: > > But the overriding purpose of PATCH is to save bytes over > > using PUT. If > > It may be the overriding one for you, but that's not > necessarily the case for everybody. I don’t even see how it would save bytes, actually. That wasn’t what I consider PATCH to be good for at all. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
I will be out of the office starting 29.06.2007 and will not return until 23.07.2007. I will have occasional email access only.
On 6/22/07, Nick Gall <nick.gall@...> wrote: > On 6/21/07, Mark Baker <distobj@...> wrote: > > Does anybody disagree with "set the state of the targetted > > resource to that represented in the provided representation?" I > > suppose that question should really be asked on ietf-http-wg though. > > > > It doesn't help because it begs the question "set how much of the state of the targeted resource"? As much as is specified in the message, including whatever the media type specifies (if anything). > Both replacement semantics and merge semantics fit this description. I can't see how merge would fit that description. The whole "full"/"complete" vs "partial"/"incomplete" replacement bit is, as I think I said before, begging the question because the difference between them is described in terms of server behaviour (i.e. this is what the state of the resource would be after the message is sent). The definition I gave above is server behaviour independent, so IMO a superior definition. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Robert Sayre wrote:
> On 6/29/07, John Panzer <jpanzer@...> wrote:
>
>> But that references this, which explicitly allows for extensions. (I'm thinking this
>>is mostly academic, as the other objections are making PATCH win by a neck
>>anyway...)
>>
>
>
> Actually, I think this is wrong. The content-range-spec production
> does not reference the more general range-unit, but instead uses
> bytes-unit directly.
I looked more closely and I think there's a contradiction in RFC2616.
14.35.1 calls the production "ranges-specifier" and then gives only one
choice, "byte-ranges-specifier". I'm guessing this is junk DNA from
when somebody wanted to leave room for extensibility:
ranges-specifier = byte-ranges-specifier
byte-ranges-specifier = bytes-unit "=" byte-range-set
And then 14.35.2 talks about the Range: request header in the same terms
(bytes only). But then 3.12 contradicts 14.35, unless I'm misreading:
---
3.12 Range Units
HTTP/1.1 allows a client to request that only part (a range of) the
response entity be included within the response. HTTP/1.1 uses range
units in the Range (section 14.35) and Content-Range (section 14.16)
header fields. An entity can be broken down into subranges according to
various structural units.
range-unit = bytes-unit | other-range-unit
bytes-unit = "bytes"
other-range-unit = token
The only range unit defined by HTTP/1.1 is "bytes". HTTP/1.1
implementations MAY ignore ranges specified using other units.
HTTP/1.1 has been designed to allow implementations of applications that
do not depend on knowledge of ranges.
---
Or am I misunderstanding this? Nowhere does 14.35 use the term
range-unit, but 14.5 (Accept-Ranges) does. But the English prose in
3.12 clearly implies that... grr. Spec bug? And if so, in what direction?
-John
My apologies. Did I mention that Lotus Notes sucks? Stefan On Jun 30, 2007, at 4:12 PM, stefan.tilkov@... wrote: > > I will be out of the office starting 29.06.2007 and will not return > until > 23.07.2007. > > I will have occasional email access only. > > >
> >>>* Julian Reschke <julian.reschke@...> [2007-06-30 09:30]: >>> Eric J. Bowman wrote: >>> But the overriding purpose of PATCH is to save bytes over >>> using PUT. If >> >> It may be the overriding one for you, but that's not >> necessarily the case for everybody. > >I don’t even see how it would save bytes, actually. That wasn’t >what I consider PATCH to be good for at all. > Quoting Dr. Fielding once more, if you don't like what I say about PATCH saving bytes, then take it up with him because I'm saying exactly the same thing: > >PATCH has very specific semantics and a very specific >goal of reducing bits on updates. > Yes, I realize I'm easier to argue with, but please understand that this isn't something I just made up. "...very specific goal of reducing bits on updates..." is very, very unambiguous. If your PATCH takes more bytes than it would to accomplish the same thing with PUT, then what is your reason for using PATCH? -Eric
* Eric J. Bowman <eric@...> [2007-06-30 22:30]:
> Yes, I realize I'm easier to argue with, but please understand
> that this isn't something I just made up.
Why are you assuming I’d take his word for gospel?
> If your PATCH takes more bytes than it would to accomplish the
> same thing with PUT, then what is your reason for using PATCH?
If it’s a full-monty PUT:
A1. No need for ETag tracking and retry logic on the client in
a number of situations.
A2. Simpler *and* more expressive ETag semantics in cases where
they *are* needed.
If we’re talking about some kind of range-limited PUT (whether
by Content-Range or some URI-based range addressing mechanism):
B1. A single roundtrip regardless of how many aspects of the
resource need changing.
B2. Corollary to #B1 (and partially #A1): no need to invent
a transaction mechanism when consistent resource state is
a requirement.
And if we’re talking specifically about PUT+Content-Range:
B3. More consistent failure modes with regard to intermediaries.
All of these apply regardless of whether PATCH is saving on bytes
or not.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
>>>>> "Nick" == Nick Gall <nick.gall@...> writes:
Nick> 100% unambiguous?! This is what I'm just not seeing. Where
Nick> in section 9.6 does it unambiguously say "replace" or
Nick> "omitted parts of the entity are to be removed"? All I see
Nick> are:
If you get the whole context:
The PUT method requests that the enclosed entity be stored under the
supplied Request-URI. If the Request-URI refers to an already
existing resource, the enclosed entity SHOULD be considered as a
modified version of the one residing on the origin server.
The first sentence is very unambiguous. Modified simply means
replaced, just read the sentence carefully. The submitted entity is
stored. The submitted entity is considered to be a modified version.
It does not say, as you read it, to modify the existing entity with
the submission.
--
Cheers,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
My funky two-stage DELETE idea has a "lost update problem" due to its non-idempotence, the fix is a conditional DELETE. The server is set up to respond 410 Gone when a GET request maps to a zero-byte file on the server. Deleting the file from the filesystem results in a 404 Not Found, by default. So the first stage DELETE changes the source-file size to zero bytes, which sets a 410 Gone. What I want to do is have a second DELETE on the same URL erase the file, thus changing the response code to 404. But if two users DELETE the same resource, the inadvertent result (which neither user intends) is the 404 response on a subsequent GET. My DELETE implementation is non-idempotent because the side-effects of a second DELETE request identical to the first DELETE request are different. Even though each DELETE is idempotent by itself, the sequence is not. The conditional DELETE is an if-match request. A 412 Precondition Failed response is used, if no match. A 409 Conflict response is used if the filesize is zero bytes, with a message body "Conditional DELETE request detected." If the filesize is greater than zero bytes and the DELETE is unconditional, the response is 409 Conflict with a message body "Unconditional DELETE request detected." The existing 200 OK response and message body are returned if the conditional DELETE succeeds. Conditional DELETE requests using anything but if-match result in a 400 Bad Request response. The second DELETE is unconditional, which is not identical to the conditional DELETE request in terms of idempotence because the headers are different. The response is 204 No Content after the source file is deleted from disk, subsequent DELETE requests respond 404. Now, the methods and sequences are idempotent, and the lost-update problem is fixed. -Eric
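Eric's two-stage DELETE can be modeled with a toy in-memory store (a dict standing in for the filesystem; the status codes and messages follow his post, everything else is illustrative):

```python
# Toy model: stage one (conditional) truncates the resource so GETs answer
# 410 Gone; stage two (unconditional) removes it so GETs answer 404.
import hashlib

def etag_of(body: bytes) -> str:
    return '"%s"' % hashlib.md5(body).hexdigest()

def get(store, path):
    if path not in store:
        return 404
    return 410 if store[path] == b"" else 200

def delete(store, path, if_match=None):
    if path not in store:
        return 404
    body = store[path]
    if body == b"":
        # Stage two must be unconditional.
        if if_match is not None:
            return 409  # "Conditional DELETE request detected."
        del store[path]
        return 204
    # Stage one must be conditional.
    if if_match is None:
        return 409  # "Unconditional DELETE request detected."
    if if_match != etag_of(body):
        return 412  # Precondition Failed
    store[path] = b""  # zero-byte file: future GETs say 410 Gone
    return 200

store = {"/r": b"data"}
tag = etag_of(b"data")
```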
On 7/2/07, Eric J. Bowman <eric@...> wrote:
> What I want to do is have a second DELETE on the same URL erase the
> file, thus changing the response code to 404. But if two users DELETE
> the same resource, the inadvertent result (which neither user intends)
> is the 404 response on a subsequent GET. My DELETE implementation is
> non-idempotent because the side-effects of a second DELETE request
> identical to the first DELETE request are different. Even though each
> DELETE is idempotent by itself, the sequence is not.

Or the first "DELETE" could be a PUT.
On 7/2/07, Eric J. Bowman <eric@...> wrote:
> My funky two-stage DELETE idea has a "lost update problem" due to its
> non-idempotence, the fix is a conditional DELETE.

I would suggest that the "fix" is to make your DELETE idempotent.

http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.2

"""Methods can also have the property of "idempotence" in that (aside
from error or expiration issues) the side-effects of N > 0 identical
requests is the same as for a single request. The methods GET, HEAD,
PUT and DELETE share this property."""

-joe

--
Joe Gregorio
http://bitworking.org
I'm new to this list, coming here after reading and thoroughly enjoying RESTful Web Services.

One question the book raised for me was that the authors often sent a 200 status with no response body, say for a DELETE action. This is even the recommendation in their best practices tips. An appendix in the book does describe the 204 status, and that's what I've used for such things in the past, but this didn't come up when they were describing response codes for typical flow control.

Are there any good rules of thumb about when to use the 204 status? Is a response to a DELETE action not an appropriate time to use it?

Thanks for any insight you can provide.

James Edward Gray II
I'm trying to think more and more in terms of exposing RESTful
resources in my applications. One point that has come up for me is
the need to handle your typical email validation scheme, where you
email the user and have them click a link.
My gut instinct was to add an action that doesn't conform to the
uniform access interface, so this seems like the perfect time to add a
new resource. I feel my validation resource just needs one action and
it seems to me that PUT would be the correct way to create it, since
the client has all of the data for the request.
From a Rails application, I'm thinking I should send out a link to the
resource as:
http://whatever.com/validations/{user-hash}?_method=PUT
I'm just looking for some critique of this idea. Am I on the right
track here?
Thanks for the advice.
James Edward Gray II
"James Gray II" <lists@...> writes:
> One question the book raised for me was that the authors often sent a
> 200 status with no response body, say for a DELETE action. This is
> even the recommendation in their best practices tips.

http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.7 covers this.

"A successful response SHOULD be 200 (OK) if the response includes an entity describing the status".

> Are there any good rules of thumb about when to use the 204 status?

"204 (No Content) if the action has been enacted but the response does not include an entity."

I am not sure why they are recommending a 200 status for no response body. I'd like to think that it was pragmatism, on the chance that many implementations check only for the 200 status code. But I somehow doubt that, because it is much easier to check for a 2xx status code.

YS
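YS's point about it being easier to check for the 2xx class than for exactly 200 is a one-liner; a minimal sketch:

```python
def is_success(status: int) -> bool:
    # Accept the whole 2xx class, so 200 (OK) and 204 (No Content)
    # count equally as success from the client's point of view.
    return 200 <= status < 300
```

A client written this way doesn't care whether the server picks 200-with-entity or 204-without, which is part of the argument against mandating 200 in the first place.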
On 6/27/07, James Gray II <lists@...> wrote:
> http://whatever.com/validations/{user-hash}?_method=PUT
That'd be an overloaded GET?
On 6/27/07, James Gray II <lists@...> wrote:
> Are there any good rules of thumb about when to use the 204 status?
> Is a response to a DELETE action not an appropriate time to use it?

RFC 2616 covers both these questions. See:

http://greenbytes.de/tech/webdav/rfc2616.html#status.204
http://greenbytes.de/tech/webdav/rfc2616.html#DELETE

-Tim
On 7/2/07, Gustavo Morozowski <morovaster@...> wrote:
> Hi all,
>
> Any comments on project zero (http://www.projectzero.org/)?

Joe's posted a blog entry here (http://bitworking.org/news/210/Project-Zero).

IBM have something like 40% market share of the enterprise SOA industry; WebSphere "-1" has embraced WS-* like there's no tomorrow. Project 0 may be a clean start for IBM, but I don't see them abandoning the dark side. [Disclaimer: I work for a competitor.] Putting that aside, two interesting things:

* It's not open source, just open dev. I'm somewhat reminded of MS's Channel 9 and fairly open access to the dev team; I wonder if marketing will be involved in Project 0 to make all thinking public, like the discussions on pricing, whether they view Ruby/Rails as a competitor, etc.

* Use of Groovy and PHP as scripting frameworks. View it as an experiment to gauge consumer reaction. If you like it, they get to convince management to go to product release and price things high enough to make the DB2 license you will also need seem like a bargain. If it doesn't get takeup, then it can die without IBM having to publicly announce "WS bad, REST good", "WebSphere -1 bad, Project 0 good".

It could be the basis for a WebSphere successor, or it's an attempt to divert engineering effort from OSS competitors, and from anything that the JCP process is trying to come up with. Everyone who writes an app on Project 0 is someone who isn't providing patches and docs to Rails, to Phobos, to Cocoon. That may appeal to some people, but long term it isn't necessarily in their interest.

-steve
"James Gray II" <lists@...> writes:
> My gut instinct was to add an action that doesn't conform to the
> uniform access interface, so this seems like the perfect time to add a
> new resource. I feel my validation resource just needs one action and
> it seems to me that PUT would be the correct way to create it, since
> the client has all of the data for the request.
Just do a GET on a validator resource.
GET http://whatever.com/congratulation-for-verifying-email/{user-id}
which returns a 200 followed by some blurp confirming that the email
address has been validated
-or-
a 403 (Forbidden) followed by some blurp indicating that the user-id
does not identify a valid user.
YS.
On 7/2/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
> GET http://whatever.com/congratulation-for-verifying-email/{user-id}
>
> which returns a 200 followed by some blurp confirming that the email
> address has been validated
>
> -or-
>
> a 403 (Forbidden) followed by some blurp indicating that the user-id
> does not identify a valid user.
Overloading the GET can be avoided by having a registration form where
the user enters the email address only (and a captcha, in the case of
the one I saw), and a URL pointing to a single-use form was sent in
the email. Going to the URL doesn't validate the address (i.e. no side
effects to the GET), but that's the only way to get the form and thus
the URL to which the registration gets POSTed.
That said, I admit to sort-of overloading the GET in my registration,
because it's Lazy. It's awfully hard to get Lazy Registration and REST
to cooperate (at least, when the client is a human-operated vanilla
browser, and you have to involve email).
On 2-Jul-07, at 8:49 AM, Eric J. Bowman wrote:
> ... Even though each
> DELETE is idempotent by itself,

I don't think that's what you meant to say :-)

> the sequence is not.
Karen <karen.cravens@...> writes:
> On 7/2/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
>> GET http://whatever.com/congratulation-for-verifying-email/{user-id}
>>
>> which returns a 200 followed by some blurp confirming that the email
>> address has been validated
>>
>> -or-
>>
>> a 403 (Forbidden) followed by some blurp indicating that the user-id
>> does not identify a valid user.
>
> Overloading the GET can be avoided by having a registration form
> where
I am a bit unclear as to your response. Are you saying the above is an
example of overloading GET? If so, what is the rationale, seeing that
the above scheme uses GET in a safe and idempotent manner, bearing in
mind that safe does not mean zero side-effects on the server.
YS.
On 7/2/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
> I am a bit unclear as to your response. Are you saying the above is an
> example of overloading GET? If so, what is the rationale seeing that
> the above schema use GET in a safe and idempotent manner, bearing in
> mind that safe does not mean 0 side-effect on the server.

I think it could be argued either way... since the validation is the sole *purpose* of the GET, it may be hard to justify it as a "side effect," at least from a philosophical perspective. From a technical perspective, maybe. It depends on where you draw the line between "side effect" and "modifying a resource."
>> My funky two-stage DELETE idea has a "lost update problem" due to its
>> non-idempotence, the fix is a conditional DELETE.
>
> I would suggest that the "fix" is to make your DELETE idempotent.

As I explained, the conditional DELETE does make the DELETE idempotent, in line with section 9.1.2, unless I've missed something. In what way is the modified procedure non-idempotent, Joe?

-Eric
> Or the first "DELETE" could be a PUT.

Hmmm, I don't think it would be right for a PUT to change the status of a resource from 200 to 410. Besides, on my system, it is not possible to PUT a zero-byte file because the server only accepts well-formed XML in PUT requests.

-Eric
You shouldn't validate merely on a GET. You should have a form that
requires explicit confirmation by the user.
The reason is that I can submit someone else's email and the link in
the email might be followed by the curious victim, or even by software
that pre-fetches URLs. I then have a 'confirmed' email but would be
the only one with the credentials.
On 7/2/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
> "James Gray II" <lists@...> writes:
>
> > My gut instinct was to add an action that doesn't conform to the
> > uniform access interface, so this seems like the perfect time to add a
> > new resource. I feel my validation resource just needs one action and
> > it seems to me that PUT would be the correct way to create it, since
> > the client has all of the data for the request.
>
> Just do a GET on a validator resource.
>
> GET http://whatever.com/congratulation-for-verifying-email/{user-id}
>
> which returns a 200 followed by some blurp confirming that the email
> address has been validated
>
> -or-
>
> a 403 (Forbidden) followed by some blurp indicating that the user-id
> does not identify a valid user.
>
> YS.
On 7/2/07, Mike Dierken <dierken@...> wrote:
> You shouldn't validate merely on a GET. You should have a form that
> requires explicit confirmation by the user.
> The reason is that I can submit someone else's email and the link in
> the email might be followed by the curious victim, or even by software
> that pre-fetches URLs. I then have a 'confirmed' email but would be
> the only one with the credentials.

Not if the credentials involve a password that was mailed only to the to-be-verified email address, but your other points stand.

I still like the scheme I outlined above, but haven't figured out how to apply it to Lazy Registration. The best I've done is this:

Say you have your standard bottom-of-the-blog-entry comment form, and your prospective member decides to comment. You don't want to ship them off to another page, least of all one you only give them the URL to in an email, or they're liable to decide it's all too much effort. The nice low-threshold form is right there, and it's one blog commenters are conditioned to already: they type their name, email, and maybe web address in the boxes, type their comment, hit the button... et voila, they've just Lazily Registered.

Now, in my case I send them their initial password, rather than a GET link, and if they now want to do anything that requires authen, they're going to get to enter it on a web form anyway, to be POSTed or PUT somewhere. It's maybe stretching the definition a little to, as part of the authen, go and mark them as "confirmed" (since it's not necessarily part of the resource they're creating/editing), but it's the best I've come up with, and it requires the minimum amount of intrusiveness.
One point of confusion here might be: why would a response to a DELETE ever include an entity? I can think of a couple of possible cases, but the most common is probably to return HTML content with a human-friendly message about the DELETE having succeeded, what the implications are, and containing links to related Resources. Another case is a system where DELETE is a logical delete, i.e. the resource is archived and not absolutely deleted, in which case there might be a representation of the resource to return, in its "archived" state.

Yohanes' point about some clients mishandling non-200 status codes is a good one. I've worked with a few different clients that treated 201 or 204 as an exception. Obviously those were non-conforming clients, but they still had to be dealt with. I deal with clients like that by checking if they sent a Request Header called X-crippled-client with a boolean true value; if so, I always send back a 200 OK, regardless of what the correct Status Code should be, and I include the correct Status Code in a Response Header called "X-true-status-code".

Avi

On 7/2/07, Tim Olsen <tolsen718@...> wrote:
> On 6/27/07, James Gray II <lists@...> wrote:
> > Are there any good rules of thumb about when to use the 204 status?
> > Is a response to a DELETE action not an appropriate time to use it?
>
> RFC 2616 covers both these questions.
>
> See:
>
> http://greenbytes.de/tech/webdav/rfc2616.html#status.204
>
> http://greenbytes.de/tech/webdav/rfc2616.html#DELETE
>
> -Tim

--
Avi Flax
Lead Technologist
arc90 | http://arc90.com
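The X-crippled-client workaround Avi describes could be sketched as a one-function filter at the response stage. The header names are the ones given above; everything else here is a made-up illustration:

```python
def adjust_status(true_status: int, request_headers: dict):
    """If the client flagged itself as crippled, always answer 200 OK
    and carry the real status code in an X-true-status-code response
    header; otherwise pass the true status through unchanged."""
    crippled = request_headers.get("X-crippled-client", "").lower() == "true"
    if crippled:
        return 200, {"X-true-status-code": str(true_status)}
    return true_status, {}
```

A conforming client that never sends the flag sees correct codes; a broken one that sends it gets a uniform 200 and can read the real code out of the extra header.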
"Mike Dierken" <dierken@...> writes:
> You shouldn't validate merely on a GET. You should have a form that
> requires explicit confirmation by the user.
> The reason is that I can submit someone else's email and the link in
> the email might be followed by the curious victim, or even by software
> that pre-fetches URLs. I then have a 'confirmed' email but would be
> the only one with the credentials.

That's a good observation. Why would someone want to do this? I can think of two reasons:

- the 'just because' factor,
- spamming the email owner with 'newsletters' from the website -- but there are better ways to spam someone than relying on that someone following the validation link and the frequency of the newsletter.

I'm pretty much ruling out anonymity because there are better mechanisms to hide your identity (Tor network, throw-away hotmail address, etc.).

What other reasons are there?

YS.
On 7/3/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
> What other reasons are there?

It seems to be to the benefit of spammers to throw out as much chaff as possible - if they can get legit sites blacklisted as often as possible, the legit sites will take care of the task of getting rid of the blacklist for them. So if you can cheaply feed your whole spam list into web forms, you do it.

Also, if you *can* get people to authenticate your web board (for instance) account and it's permitted you to set your own password, then you get to spam the web board.

The "just because" factor is surprisingly significant, too.
>
>No it isn't. Stored does not "very unambiguously" mean "replaced". With
>such abstract terms ("store", "replace", "modify", "update" -- elsewhere
>in the spec, PUT is referred to as an "updating" method), only "replaced"
>very unambiguously means "replaced".
>
There's some disagreement here, which stems from the wording of the RFC.
But HTTP is not REST; if we are discussing the semantics of PUT in REST
terms (generic interface), then "store" means "replace" in RFC 2616 just
like STOR means replace in RFC 765.
"REST does not restrict communication to a particular protocol, but it
does constrain the interface between components, and hence the scope of
interaction and implementation assumptions that might otherwise be made
between components. For example, the Web's primary transfer protocol is
HTTP, but the architecture also includes seamless access to resources that
originate on pre-existing network servers, including FTP, Gopher, and WAIS.
Interaction with those services is restricted to the semantics of a REST
connector."
This tells me that a REST connector must understand that the semantics of
GET equal the semantics of RETR, APPE=POST, DELE=DELETE, LIST=OPTIONS and
STOR=PUT in order to meet the Uniform Interface constraint. Implementing
PUT with merge semantics may not go against RFC 2616 (although I don't
think that was the intent) but I don't see how it doesn't break REST to
do so. FTP includes methods not included in the semantics of a REST
connector, just as HTTP PATCH has no merge corollary in FTP, but the
meaning of "store" is "replace" in both FTP and HTTP according to REST.
-Eric
On 27 Jun 2007, at 14:40, John Panzer wrote:
> Haven't seen anything further on this, and it's a serious
> question. Anyone?
What about PUTting with the Content-Type: multipart/byteranges [1]?
If the server doesn't understand multipart/byteranges, it can say 415
Unsupported Media Type.
This would also allow uploading several chunks at once.
For "PATCH" functionality - use your own patch-mime-type or include
support for patches in the format. For instance with XML-based formats:
PUT /stuff/13
Content-Type: application/vnd.fish.patch
<p:patches xmlns:p="http://example.org/patch">
<p:replace xpath="//owner">
<owner>
<user>john</user>
</owner>
</p:replace>
</p:patches>
(The patch-mime-type shouldn't be needed if the format itself
supports the patches)
It has already been agreed that since all representations are/can be
partial on GET, representations uploaded with PUT would often be
partial as well.
[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.2
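For what it's worth, applying such a patch document server-side might look roughly like this. A stdlib-only sketch that handles only the trivial "//element-name" flavour of XPath from the example above, nothing more:

```python
import xml.etree.ElementTree as ET

PATCH_NS = "{http://example.org/patch}"

def apply_patches(doc_xml: str, patch_xml: str) -> str:
    """Apply <p:replace xpath="//name">...</p:replace> patches to a
    document, swapping each matched child element for the replacement
    element carried inside the patch."""
    doc = ET.fromstring(doc_xml)
    patches = ET.fromstring(patch_xml)
    for repl in patches.findall(f"{PATCH_NS}replace"):
        # "//owner" -> match on the bare element name "owner"
        name = repl.get("xpath").lstrip("/").split("/")[-1]
        new_elem = list(repl)[0]          # the replacement element
        for parent in list(doc.iter()):   # snapshot: we mutate below
            for i, child in enumerate(list(parent)):
                if child.tag == name:
                    parent[i] = new_elem
    return ET.tostring(doc, encoding="unicode")
```

A real implementation would want a proper XPath engine and would have to decide what happens on zero or multiple matches; this is only meant to show that the patch format above is mechanically applicable.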
--
Stian Soiland, myGrid team
School of Computer Science
The University of Manchester
http://www.cs.man.ac.uk/~ssoiland/
Hello,

Apologies if I revive last week's thread about POST being idempotent or not, but it got me confused on the choice between using POST or PUT in the system on which I'm working, and idempotent messages matter in this case.

We've designed a system (which I believe complies with the REST principles) that relies on a client interacting with a resource to create other resources.

To summarise how this system works, a client can ask the Factory resource to create Item resources. The process of creating an Item resource must follow two constraints:
- at the end of the process, the client must have created one Item resource and must know its URI;
- at the end of the process, the Factory resource must know that the client knows the new Item resource URI.

Initially, we assume the client has been configured to know the URI of the Factory: http://example.org/Factory.

The protocol we use is as follows (the numbers in the URIs are just examples):

(phase 1: tentative)
C: HTTP PUT to http://example.org/Factory containing <newItemReq />
S: HTTP 201 with Location: http://example.org/Item/1

(phase 2: activation)
C: HTTP PUT to http://example.org/Item/1 containing <resActivator />
S: HTTP 200 OK.

An HTTP GET on http://example.org/Factory returns a list of all the Items that have been created and 'activated' (i.e. of which the Factory knows that the client knows their address). If the Factory never receives the second PUT, the Item resource is not in the list document. (Optionally, if the client interacts later with the Item resource, this may activate this resource anyway.) Item resources that are not activated may be discarded at any time by the Factory (in which case the client would have to re-start the entire process).
Here is an example of what could go wrong and how this would be handled:

(phase 1: tentative)
C: HTTP PUT to http://example.org/Factory containing <newItemReq />
S: HTTP 201 with Location: http://example.org/Item/1
- the connection is lost and the new URI never reaches the client.

(phase 1: another attempt)
C: HTTP PUT to http://example.org/Factory containing <newItemReq />
S: HTTP 201 with Location: http://example.org/Item/2

(phase 2: activation)
C: HTTP PUT to http://example.org/Item/2 containing <resActivator />
S: HTTP 200 OK.
- the client knows the URI of the Item it has successfully created, and the factory knows that the client knows.

The 'activation' PUT is idempotent because sending it N+1 times has the same effect as sending it just once.

I also think the first 'tentative' PUT is idempotent, although it is more subtle. Effectively, whatever the URI returned in the tentative phase is does not matter, either to the client or to the factory. When taking into account the two phases, sending N+1 'tentative' PUTs + an 'activation' has the same result as sending only one tentative PUT followed by an activation: only one Item resource is activated and both the client and the server know about its URI.

When reading Sections 9.5 and 9.6 of RFC 2616, my use of PUT here is not appropriate, and it should probably be POST, in both cases, for two different reasons:
- the 'tentative' PUT clearly does not comply with "the URI in a PUT request identifies the entity enclosed with the request" (in Section 9.6);
- the 'activation' PUT might rather be considered as an "annotation of existing resources" (in Section 9.5).

I'm tempted to change these two PUTs into POSTs. However, I quite like the fact that PUT is intended to be idempotent. I believe the fact that a request is guaranteed to be idempotent is more important than "the URI in a PUT request identifies the entity enclosed with the request" when designing distributed systems.
Obviously, the use of POST in this system may be idempotent, but it appears to me that it's a constraint that deserves to be given more importance, by using PUT. (By the way, to refer to last week's thread, my understanding of POST not being idempotent is that N+1 times the same request may or may not have the same effect as just one.)

I'm not sure which one is right between PUT and POST (in both cases), although I tend to think at least the 'tentative' PUT ought to be a POST. Any comments appreciated.

Best wishes,

Bruno.
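Bruno's two-phase protocol can be modelled as a toy Factory class (hypothetical names, just to illustrate the idempotence claim: repeated phase-1 requests merely mint extra unused URIs, which the Factory may discard, while phase 2 can be repeated with no further effect):

```python
import itertools

class Factory:
    """Toy model of the tentative/activation protocol described above."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.items = {}  # uri -> activated?

    def tentative(self):
        # Phase 1 (<newItemReq />): mint a fresh Item URI each time.
        # Repeating this is harmless: unactivated Items may be discarded.
        uri = f"http://example.org/Item/{next(self._ids)}"
        self.items[uri] = False
        return 201, uri

    def activate(self, uri):
        # Phase 2 (<resActivator />): idempotent; sending it N+1 times
        # has the same effect as sending it once.
        if uri not in self.items:
            return 404
        self.items[uri] = True
        return 200

    def active_items(self):
        # What a GET on the Factory would list.
        return [u for u, on in self.items.items() if on]
```

Whether the two requests are spelled PUT or POST on the wire, the state machine is the same; the debate above is about which verb's contract matches it.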
I'd consider modeling it like this:

-- create a new (inactive) request
C: POST http://example.org/Factory <newItemReq />
S: HTTP 200 with <itemReq /> document and Location: http://example.org/Item/1

-- activate existing request
C: HTTP PUT to http://example.org/Item/1 containing modified <itemReq /> that has the activate information
S: HTTP 200 OK.
S: HTTP 404 Not Found (for invalid request ids)
S: HTTP 410 Gone (for expired/lost requests)

mamund

> (phase 1: tentative)
> C: HTTP PUT to http://example.org/Factory containing <newItemReq />
> S: HTTP 201 with Location: http://example.org/Item/1
>
> (phase 2: activation)
> C: HTTP PUT to http://example.org/Item/1 containing <resActivator />
> S: HTTP 200 OK.

<snip>
On 7/5/07, Bruno Harbulot <Bruno.Harbulot@...> wrote:
> Hello,
>
> Apologies if I revive last week's thread about POST being idempotent or
> not, but it got me confused on the choice between using POST or PUT in
> the system on which I'm working, and idempotent messages matter in this
> case.
</snip>
When you use PUT, the URI sent in the request is the identifier of the content within that message. So doing the following would replace the 'factory' with a 'newItem' blob:

C: HTTP PUT to http://example.org/Factory containing <newItemReq />

One difference between PUT and POST is that with PUT the client already knows the intended URI (the 'id') of the data being submitted. With POST, the server might create a new identifier for the submitted data and the client needs to receive the response to learn what that identifier is.

In order to activate the resource (acknowledge to the server that the client received the response) you are sending a PUT to the new resource with a particular content type. Sending different content types via PUT gives rise to really long discussion threads on rest-discuss and I generally avoid that.

Perhaps you could request a new blank resource be created via a POST to the factory, then populate that blank resource via a PUT with the actual content to be stored - this initializing PUT on a blank resource would be idempotent, so simple retries would help get past some network failures. If the factory POST didn't work, the client can retry, but a different identifier would be created in that case - which is similar to how your design allows for un-acknowledged resources to fade away, and similar to what Mike Amundsen suggested as well.

> -----Original Message-----
> From: rest-discuss@yahoogroups.com
> [mailto:rest-discuss@yahoogroups.com] On Behalf Of Bruno Harbulot
> Sent: Thursday, July 05, 2007 11:48 AM
> To: rest-discuss@yahoogroups.com
> Subject: [rest-discuss] PUT or POST, idempotent for the application
>
> Hello,
>
> Apologies if I revive last week's thread about POST being
> idempotent or not, but it got me confused on the choice
> between using POST or PUT in the system on which I'm working,
> and idempotent messages matter in this case.
* Eric J. Bowman <eric@...> [2007-07-02 13:50]:
> What I want to do is have a second DELETE on the same URL erase
> the file, thus changing the response code to 404. But if two
> users DELETE the same resource, the inadvertent result (which
> neither user intends) is the 404 response on a subsequent GET.
> My DELETE implementation is non-idempotent because the
> side-effects of a second DELETE request identical to the first
> DELETE request are different. Even though each DELETE is
> idempotent by itself, the sequence is not.

Having people DELETE a URI which the server says is 410 is unusual enough; it doesn’t really matter if it becomes a bit more peculiar, so I’d just reject any DELETE to a 410 URI that lacks an If-Match header.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Eric J. Bowman <eric@...> [2007-07-02 23:45]:
> Hmmm, I don't think it would be right for a PUT to change the
> status of a resource from 200 to 410.

Why not? The response to the PUT would be 2xx, of course, but it’s well within the server’s rights to respond 4xx to subsequent requests.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi Bruno,
* Bruno Harbulot <Bruno.Harbulot@...> [2007-07-05 20:50]:
> I'm tempted to change these two PUTs into POSTs. However, I
> quite like the fact that PUT is intended to be idempotent. I
> believe the fact that a request is guaranteed to be idempotent
> is more important than "the URI in a PUT request identifies the
> entity enclosed with the request" when designing distributed
> systems.
> Obviously, the use of POST in this system may be idempotent,
> but it appears to me that it's a constraint that deserves to be
> given more importance, by using PUT. (By the way, to refer to
> last week's thread, my understanding of POST not being
> idempotent is that N+1 times the same request may or may not
> have the same effect as just one.)
>
> I'm not sure which one is right between PUT and POST (in both cases),
> although I tend to think at least the 'tentative' PUT ought to be a
> POST. Any comments appreciated.
See http://bitworking.org/news/201/RESTify-DayTrader
Short answer: the client POSTs a request, upon which the server
creates a ticket of the form
{$uuid}:{sha1(concat($uuid,$some_secret))}
The response contains this ticket in the Location of the form
http://example.org/Item/{$ticket}
which the client is expected to PUT to.
Validating such a ticket relies on knowing the secret only, so
the server can do it without the necessity for any per-ticket
storage.
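The issue/validate pair can be sketched in a few lines. This is a minimal illustration, assuming Python; the `SECRET` constant stands in for `$some_secret` above, and the function names are invented. Note that validation recomputes the digest from the ticket's own UUID, so the server keeps no state per ticket.

```python
import hashlib
import hmac
import uuid

# Hypothetical server-side secret; stands in for $some_secret above.
SECRET = b"server-secret"

def issue_ticket():
    """Create a ticket of the form {uuid}:{sha1(uuid + secret)}."""
    u = uuid.uuid4().hex
    mac = hashlib.sha1(u.encode() + SECRET).hexdigest()
    return "%s:%s" % (u, mac)

def ticket_is_valid(ticket):
    """Check a ticket knowing only the secret -- no per-ticket storage."""
    u, _, mac = ticket.partition(":")
    if not mac:
        return False
    expected = hashlib.sha1(u.encode() + SECRET).hexdigest()
    # constant-time comparison to avoid leaking digest prefixes
    return hmac.compare_digest(mac, expected)
```

The server would embed `issue_ticket()` in the Location of the POST response and call `ticket_is_valid()` when the PUT arrives.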
You could use GET with this scheme if you really wanted to make
the first request idempotent, but then I’d worry about broken
intermediaries that might cache the response despite any number
of headers to the contrary. POST provides an easy reliable way
to pierce caches.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> Having people DELETE a URI which the server says is 410 is
> unusual enough; it doesn't really matter if it becomes a bit more
> peculiar, so I'd just reject any DELETE to a 410 URI that lacks
> an If-Match header.

OK, yeah -- I like that better than what I had. Thanks!

-Eric
> Why not? The response to the PUT would be 2xx, of course, but
> it's well within the server's rights to respond 4xx to subsequent
> requests.

According to RFC 2616, sure. But in REST terms you've just changed
the semantics of PUT from "replace" to "remove", breaking REST's
uniform interface constraint. If the interaction is designed to
change the state of a resource from "exists" to "does not exist"
then the semantics are "remove", not "replace". If the intention of
the interaction is "remove", then the appropriate method is DELETE.

If a server wants to accept a PUT with no content, whether the
resource exists or not, the resulting zero-byte file should result
in a 204 response to subsequent GET requests, IMHO.

-Eric
A. Pagaltzis wrote:
> * Eric J. Bowman <eric@...> [2007-07-02 13:50]:
>> What I want to do is have a second DELETE on the same URL erase
>> the file, thus changing the response code to 404. But if two
>> users DELETE the same resource, the inadvertent result (which
>> neither user intends) is the 404 response on a subsequent GET.
>> My DELETE implementation is non-idempotent because the
>> side-effects of a second DELETE request identical to the first
>> DELETE request are different. Even though each DELETE is
>> idempotent by itself, the sequence is not.
>
> Having people DELETE a URI which the server says is 410 is
> unusual enough; it doesn’t really matter if it becomes a bit more
> peculiar, so I’d just reject any DELETE to a 410 URI that lacks
> an If-Match header.

Yes. This confuses me considerably. Let's ignore the zero-length
files (implementation detail) and look at the resource
manipulations.

1. Resource Exists.
2. DELETE -> Resource doesn't exist.
3. DELETE -> Resource exists even less?

The state of a resource returning 404 or 410 is identical - null.
All that is different is metadata about a resource that used to
exist. This is primarily a convenience to a client trying to
understand why a GET or POST failed. If you want to manipulate the
metadata of a resource that doesn't exist you'd be best to make
that metadata a resource in itself.
* Eric J. Bowman <eric@...> [2007-07-06 10:35]:
> > Why not? The response to the PUT would be 2xx, of course, but
> > it's well within the server's rights to respond 4xx to
> > subsequent requests.
>
> According to RFC 2616, sure. But in REST terms you've just
> changed the semantics of PUT from "replace" to "remove",
> breaking REST's uniform interface constraint. If the
> interaction is designed to change the state of a resource from
> "exists" to "does not exist" then the semantics are "remove",
> not "replace". If the intention of the interaction is "remove",
> then the appropriate method is DELETE.
>
> If a server wants to accept a PUT with no content, whether the
> resource exists or not, the resulting zero-byte file should
> result in a 204 response to subsequent GET requests, IMHO.

Point taken, you’re right.

I’ll insist on nitpicking a bit though: the method is about the
client’s intention, not about the server’s implementation, so it’s
*still* fine to have a resource return 4xx subsequently to a
successful PUT – *as long* as you’re not expecting clients to say
PUT when they mean DELETE. Which was the case here. In fact I’m not
sure I can think of a legitimate scenario where my pedantic
qualification applies.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On 7/6/07, Eric J. Bowman <eric@...> wrote:
> According to RFC 2616, sure. But in REST terms you've just changed
> the semantics of PUT from "replace" to "remove", breaking REST's
> uniform interface constraint.

I think, but cannot prove, that there's an important yet subtle
distinction between "delete this thingy" and "make a change to this
thingy (that happens to result in the server choosing not to serve
it anymore)." I mean, what if one of the things I PUT is an
expiration date? That could, sooner or later, result in a 4xx
result, but it's still different from a simple "zap this, mmkay?"
request. Indeed, the initial POST could set it up for a 4xx.

With that said, if the first "deletion" isn't a full deletion,
maybe PUTting a blank isn't the right thing to do anyway?
> With that said, if the first "deletion" isn't a full deletion,
> maybe PUTting a blank isn't the right thing to do anyway?

When the resource status changes from 200 to 404 or 410, the
resource *is* deleted from the perspective of the client. The fact
that the source file still exists is neither here nor there; there
is no requirement that the server delete _anything_ in response to
a DELETE method. The only requirement is that a resource that has
been successfully deleted responds 4xx instead of 2xx.

You make a good point that a PUT or a POST can set an expiration
date for a resource, in which case it switches from 2xx to 4xx
without a DELETE, but in my case I see no reason to add complexity
when I simply want a DELETE to change a resource's status from 200
to 410. The debate here is about changing a resource's status from
410 to 404 using HTTP.

-Eric
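The 200 → 410 → 404 scheme described here can be sketched as a single handler. This is a hypothetical illustration, not the actual implementation: the zero-byte file as a "gone" marker follows the thread's description, while the function name, the If-Match requirement on the second DELETE (per the suggestion upthread), and the 412 rejection code are assumptions.

```python
import os

def handle_delete(path, if_match=None):
    """Two-stage DELETE sketch: the first DELETE truncates the source
    file to zero bytes, so later GETs answer 410; the second removes
    the marker, so later GETs answer 404. The second stage is rejected
    without an If-Match header (a real server would also compare it
    against the resource's current ETag)."""
    if not os.path.exists(path):
        return 404                   # already "not known to exist"
    if os.path.getsize(path) == 0:   # resource is in the "410" state
        if if_match is None:
            return 412               # demand proof of what the client saw
        os.remove(path)              # 410 -> 404
        return 204
    with open(path, "w"):            # truncate in place: 200 -> 410
        pass
    return 204
```

With the If-Match requirement in place, two users racing to DELETE the same resource can no longer accidentally walk it all the way to 404 between them.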
> Let's ignore the zero-length files (implementation detail) and
> look at the resource manipulations.
>
> 1. Resource Exists.
> 2. DELETE -> Resource doesn't exist.
> 3. DELETE -> Resource exists even less?

On my workstation, I'm always deleting files twice. The first time
I delete a file, the file itself is not altered or moved, yet now
it appears in my "trash" until I delete it for a second time. When
the file is in the trash, its status is "used to exist" but after
it's been deleted from the trash its status is "not known to have
existed."

I'm simply applying this everyday computing paradigm to an HTTP
server.

> The state of a resource returning 404 or 410 is identical - null.

Granted, the state of the resource between those messages is
identical, yet those two messages have different meaning. It is
reasonable to provide the user with a means to change from one to
the other, just like it is reasonable for a file system to
implement a trashcan to distinguish between "used to exist" and
"not known to have existed" as a result of user action.

> All that is different is metadata about a resource that used to
> exist. This is primarily a convenience to a client trying to
> understand why a GET or POST failed.
>
> If you want to manipulate the metadata of a resource that doesn't
> exist you'd be best to make that metadata a resource in itself.

I don't follow. There is no requirement that a resource must first
exist before I can manipulate it. I have a URL; dereferencing it
yields a response code indicating the resource doesn't exist. That
doesn't mean I can't send that URL an HTTP request.

The objection here seems limited to the fact that the state of the
resource doesn't change as the result of the second DELETE, only
the response code. But I'm not seeing where that requirement is set
in stone. OTOH, I see where REST is flexible enough to allow
innovation, even if the result is unorthodox, because orthodoxy is
in the eye of the beholder.

What matters is whether or not I've violated a standard or broken a
constraint of REST. I don't believe I have.

-Eric
Eric J. Bowman wrote:
>> Let's ignore the zero-length files (implementation detail) and
>> look at the resource manipulations.
>>
>> 1. Resource Exists.
>> 2. DELETE -> Resource doesn't exist.
>> 3. DELETE -> Resource exists even less?
>
> On my workstation, I'm always deleting files twice. The first
> time I delete a file, the file itself is not altered or moved,
> yet now it appears in my "trash" until I delete it for a second
> time. When the file is in the trash, its status is "used to
> exist" but after it's been deleted from the trash its status is
> "not known to have existed."
>
> I'm simply applying this everyday computing paradigm to an HTTP
> server.

In that case the resource should be moved to a new "trashcan"
resource and the first DELETE should return a 301 or other redirect
pointing to said resource. The 2nd DELETE should be directed to
said trashcan and the file could then be completely purged (204).
Explicitly DELETING the same resource twice is counter-intuitive
and IMO breaks idempotence.

--
Aaron Dalton | Super Duper Games
aaron@... | http://superdupergames.org
> In that case the resource should be moved to a new "trashcan"
> resource and the first DELETE should return a 301 or other
> redirect pointing to said resource. The 2nd DELETE should be
> directed to said trashcan and the file could then be completely
> purged (204). Explicitly DELETING the same resource twice is
> counter-intuitive and IMO breaks idempotence.

Can break idempotence, yes, but that's already been fixed as I
explained in another post. If you would like, you can explain
exactly how the behavior I have implemented is not idempotent and
I'm all ears. In the meantime, I make the same objection to giving
DELETE the semantics of "move" by having subsequent GET requests
respond with a redirect, as I make for using PUT to implement
"remove" semantics.

To clarify, I am not implementing a trashcan. I am implementing a
method whereby users of the service can toggle a URL between "used
to exist" and "not known to have existed" and I have done so with
an absolute minimum of complexity. There is no provision to restore
the file -- besides PUTting it back from the client side.

I point out that the trashcan on my file system does not involve
moving any files. It simply marks them as deleted, until they are
marked as undeleted or deleted for a second time (by emptying the
trash). When I try to access a deleted resource, I get an error
message, not a redirect to my trashcan.

-Eric
On 7/6/07, Eric J. Bowman <eric@...> wrote:
> You make a good point that a PUT or a POST can set an expiration
> date for a resource, in which case it switches from 2xx to 4xx
> without a DELETE, but in my case I see no reason to add complexity
> when I simply want a DELETE to change a resource's status from 200
> to 410. The debate here is about changing a resource's status from
> 410 to 404 using HTTP.

See, that's the part that seems weird to me: how can you delete
something that doesn't exist? Maybe it would help if you explained
(or refreshed my memory) what the distinction is.

I'm also not sure the codes are right, the more I look at them.
Doesn't 410 imply that the resource once existed, where 404 is more
vague? If so, then wouldn't 410 mean "It was definitely here, but
all I find now is a zero-byte file" where 404 would mean "I can't
find any trace now, or at least none I'm willing to admit to the
likes of you"?

And re your trashcan parallel: that's perhaps not a good analogy,
since trashcan files are intact in every way except for being
flagged as deletable. I'm not sure of the exact details at the
filesystem level, but based on its behavior I'd say that's more
analogous to a resource having a "deleted" property (or perhaps a
"trashcan" location, with its original location still stored
somehow) but for which HTTP (the "filesystem" level) replies with a
2xx-level response (since you can, in Windows at least, muck about
with files in the trashcan to a certain level), and your client (a
la Windows (not Internet) Explorer) simply agrees that that
location is "special."

I don't know if that's applicable to your situation, but that's how
I'd do a two-step delete (the PUT and DELETE I mentioned
initially).
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
> There's some disagreement here, which stems from the wording of
> the RFC. But HTTP is not REST; if we are discussing the semantics
> of PUT in REST terms (generic interface) then store means replace
> in RFC 2616 just like STOR means replace in RFC 765.
>
> "REST does not restrict communication to a particular protocol,
> but it does constrain the interface between components, and hence
> the scope of interaction and implementation assumptions that
> might otherwise be made between components. For example, the
> Web's primary transfer protocol is HTTP, but the architecture
> also includes seamless access to resources that originate on
> pre-existing network servers, including FTP, Gopher, and WAIS.
> Interaction with those services is restricted to the semantics of
> a REST connector."
>
> This tells me that a REST connector must understand that the
> semantics of GET equal the semantics of RETR, APPE=POST,
> DELE=DELETE, LIST=OPTIONS and STOR=PUT in order to meet the
> Uniform Interface constraint. Implementing PUT with merge
> semantics may not go against RFC 2616 (although I don't think
> that was the intent) but I don't see how it doesn't break REST to
> do so. FTP includes methods not included in the semantics of a
> REST connector, just as HTTP PATCH has no merge corollary in FTP,
> but the meaning of "store" is "replace" in both FTP and HTTP
> according to REST.

Wow! That's quite inventive reasoning, but I believe it is exactly
backwards. The passage you cite from the REST thesis tells me the
exact opposite: the protocols for FTP, Gopher, and WAIS must be
"restricted to the semantics of [] REST" -- not the other way
around. So just because FTP's STOR method might be (but does not
have to be) mapped to PUT (thus giving PUT in this particular case
replacement semantics), this doesn't imply that PUT must be
restricted to replacement semantics in all cases.
In other words, just because for some applications of HTTP the PUT
method is described as having replacement semantics, this provides
no evidence that RFC 2616 itself always restricts PUT to such
semantics. By your argument, the mapping of POST to FTP's APPE is
evidence that RFC 2616 intends POST to have ONLY "append
semantics," which is clearly a wrong argument.

The REST thesis passage that I think is more directly relevant to
the replacement vs. modification semantics issue (though by no
means definitive) is the following (section 6.2.3 Remote
Authoring):

  The resource is not the storage object. The resource is not a
  mechanism that the server uses to handle the storage object. The
  resource is a conceptual mapping -- the server receives the
  identifier (which identifies the mapping) and applies it to its
  current mapping implementation (usually a combination of
  collection-specific deep tree traversal and/or hash tables) to
  find the currently responsible handler implementation and the
  handler implementation then selects the appropriate
  action+response based on the request content. All of these
  implementation-specific issues are hidden behind the Web
  interface; their nature cannot be assumed by a client that only
  has access through the Web interface. [emphasis added]

This suggests to me that attempts to restrict the semantics of HTTP
methods specifically and RESTful methods generally to
storage-centric semantics (like requiring PUT to always have
replacement semantics) are misguided.

Now some might argue that loosening the semantics of PUT makes it
indistinguishable from POST. But they would be wrong. If Section
9.6 is clear about anything, it is clear (or at least clearer)
about the relationship between PUT and POST:

  The fundamental difference between the POST and PUT requests is
  reflected in the different meaning of the Request-URI. The URI in
  a POST request identifies the resource that will handle the
  enclosed entity.
  That resource might be a data-accepting process, a gateway to
  some other protocol, or a separate entity that accepts
  annotations. In contrast, the URI in a PUT request identifies the
  entity enclosed with the request -- the user agent knows what URI
  is intended and the server MUST NOT attempt to apply the request
  to some other resource. [emphasis added]

So the difference between PUT and POST is not in the different
meaning of their respective methods; it is only in the difference
between the relationship of the method (whatever its semantics:
process, replace, append, modify) and the URI to which the method
will be "applied". So a client should not infer anything from the
use of the method PUT on a URI beyond what the client should infer
from a POST on the same URI, EXCEPT that in the former case, the
client KNOWS that the request will be applied to the resource
identified by the provided URI and not to some other resource. So
PUT merely means "act upon the identified resource itself", while
POST means "use the identified resource to act upon other
resources".

Sections 6.2.1 and 6.2.2 reinforce my belief that REST's goal is to
generalize the concept of resource and resource manipulation (via
methods) AWAY FROM a restrictive file/document storage model to a
more powerful (but more vague) concept of resource mapping:

  "REST's definition of resource derives from the central
  requirement of the Web: independent authoring of interconnected
  hypertext across multiple trust domains. Forcing the interface
  definitions to match the interface requirements causes the
  protocols to seem vague, but that is only because the interface
  being manipulated is only an interface and not an
  implementation." [emphasis added]

PS: I find this one of the most cryptic sentences in the Thesis,
since Roy does not do a step-by-step analysis of how the interface
requirements (aka the four interface constraints listed in 5.1.5)
are actually applied to interface definitions for GET, PUT, POST
and DELETE.

-- Nick
* Elliotte Harold <elharo@...> [2007-06-22 14:00]:
> XML and its tree structures are not a perfect representation of
> human knowledge and the information we need to encode. However,
> precisely because XML is less structured than maps and lists
> and tables, it can handle more information than can be encoded
> in maps and lists and tables. There are many, many examples
> where JSON (and other map-list data structures) becomes
> practically unmanageable but which XML handles without
> blinking. However it's not a two-way street. XML can not only
> encode everything JSON can encode. It can do so practically,
> usefully, and efficiently. The reverse is not true.

The reverse *is* true. JSON can encode highly structured data more
practically, usefully and efficiently than XML ever has. JSON
targets a much smaller problem domain than XML, but that is
precisely why it can work in that domain much better than XML could
ever hope to.

And I say that just a day after defending XML against haters and a
dogpile of JSON apologists in a weblog. Both technologies have
their applications. Constraints and restrictions are great. Hey,
isn’t this rest-discuss? You’d think this point wouldn’t need
making in this crowd…

Considering that you were only just fervently talking about how
database people tend to see tables everywhere, it surprises me that
you go on to say that XML should be your hammer and every problem a
nail.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Steve Bjorg <steveb@...> [2007-06-11 06:35]:
> On Jun 10, 2007, at 9:24 PM, Josh Sled wrote:
> > Not quite. JSON is the subset of JavaScript that is the
> > simple notation for representing structured data. That
> > contains strings, numbers, booleans, and lists and maps
> > thereof. If you look around, you'll notice that pretty much
> > every programming language has these constructs, and that is
> > not by coincidence.
> >
> > The value of JSON has not much to do with JavaScript, and
> > everything to do with generality of structured (and basically
> > typed) data.
>
> So do all functional programming languages. So why JSON instead
> of ML?

JSON has everything to do with ECMAScript. S-exprs need a
convention to represent hashmaps; they are not a native part of the
syntax.

* Elliotte Harold <elharo@...> [2007-06-23 13:00]:
> Depends on what you mean by a programming language,

Contemporary dynamic languages. Perl, Python, Ruby, Javascript and
a host of lesser-known others. The stuff where all the action is
happening these days. (I know you are a Java guy; sorry there.)

> Semistructured is better than structured, but we won't have
> achieved real info nirvana until we learn how to manage
> unstructured data. When that is achieved, computer science
> will have finally grown up.

The ’50s called, they want their AI slogans back. :-)

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> See, that's the part that seems weird to me: how can you delete
> something that doesn't exist? Maybe it would help if you
> explained (or refreshed my memory) what the distinction is.

"A resource can map to the empty set, which allows references to be
made to a concept before any realization of that concept exists..."

Resources never exist, they're a conceptual mapping. Even undefined
resources have representations -- a 404 error is a representation
of the state of the identified resource as being undefined. IOW, I
am disavowing my previous statement that a 404 and a 410 represent
the same resource state. They do not. An undefined resource still
has state when it's dereferenced, otherwise no 404 representation
could be returned for bogus requests.

"Depending on the message control data, a given representation may
indicate the current state of the requested resource, the desired
state for the requested resource, or the value of some other
resource, such as a representation of the input data within a
client's query form, or a representation of some error condition
for a response." (REST)

So a 404 response to a GET request indicates that the state of the
identified resource is "not known to exist", while a 410 response
indicates the state of the resource is "known to have existed". I
am using an HTTP DELETE request to change the state of the resource
from "exists" to "known to have existed" by manipulating a
representation of the defined resource.

To change the state of the resource from "known to have existed" to
"not known to exist" I send a DELETE requesting the server remove
all recollection of the resource having once existed -- the
semantics in both cases are "remove". The status code of the
response to subsequent GET requests, i.e. the representation of
resource state, has changed through my manipulation of a
representation of the now-undefined resource through a generic REST
interface.

> I'm also not sure the codes are right, the more I look at them.
> Doesn't 410 imply that the resource once existed, where 404 is
> more vague? If so, then wouldn't 410 mean "It was definitely
> here, but all I find now is a zero-byte file" where 404 would
> mean "I can't find any trace now, or at least none I'm willing to
> admit to the likes of you"?

The 410 response after one DELETE on my server indicates that the
resource "used to exist", not that the file is zero bytes -- that
information is opaque to the client. The zero-byte thing is just my
implementation; it certainly doesn't redefine the 410 response as
meaning anything about the status of the source file. The 410
representation is merely a status code informing the client that
the resource "used to exist".

The server could use some other mechanism to mark the mapping of
URI to source file as "used to exist"; there is no requirement for
any file operation to occur on the source file. In fact, there is
no requirement that the state of the resource be changed at all in
response to DELETE. But the server must retain some knowledge of
the once-existent resource in order for a 410 response to even be
possible, right? Any way I look at it, a 410 requires a semantic
mapping from the resource identifier to some sort of source, and
that mapping is fair game for a DELETE request.

The second DELETE is wholly optional for the system administrator
to override the default 410 response with a 404 response, by
requesting that the conceptual mapping from request URL to
410-response-code-inducing source (whatever that source may be, in
my case, a zero-byte file) be removed. The state of the resource
changes from "known to have existed" (which requires some sort of
conceptual mapping) to "not known to exist" which requires no
conceptual mapping.

"It is not necessary to mark all permanently unavailable resources
as 'gone' or to keep the mark for any length of time -- that is
left to the discretion of the server owner." (RFC 2616)

This implies that a 410 response is "marked" and I assert that such
a mark is a perfectly valid target for DELETE.

> And re your trashcan parallel: that's perhaps not a good analogy,
> since trashcan files are intact in every way except for being
> flagged as deletable.

Which makes my analogy fail why, exactly? DELETE imposes no
requirement on server behavior. As the server implementer I am free
to move the source file, delete the source file, "mark" the source
file or implement some sort of lookup table bearing no relation to
the source file, and I am free to decide whether to use 404, 410 or
both over time to indicate a successful DELETE.

Except for those of us who do not use a trashcan, we are all
intuitively using two-stage deletes which toggle the state of the
delete target from "exists" to "used to exist" followed by
"nonexistent". As you point out, there are a variety of ways to
implement this on the filesystem level, just as there are a variety
of ways to implement this on the HTTP level.

> I don't know if that's applicable to your situation, but that's
> how I'd do a two-step delete (the PUT and DELETE I mentioned
> initially).

Sorry, but if your PUT results in a subsequent GET returning a 404
or a 410 response then you have still broken the uniform interface
constraint of REST by defining PUT as "move" or "remove" instead of
"replace".

-Eric
Eric J. Bowman wrote:
> > See, that's the part that seems weird to me: how can you delete
> > something that doesn't exist? Maybe it would help if you
> > explained (or refreshed my memory) what the distinction is.
>
> "A resource can map to the empty set, which allows references to
> be made to a concept before any realization of that concept
> exists..."
>
> Resources never exist, they're a conceptual mapping. Even
> undefined resources have representations -- a 404 error is a
> representation of the state of the identified resource as being
> undefined. IOW, I am

Nope. 404 means no representation is available.

> disavowing my previous statement that a 404 and a 410 represent
> the same resource state. They do not. An undefined resource
> still has state when it's dereferenced, otherwise no 404
> representation could be returned for bogus requests.

Please define "undefined resource".

> "Depending on the message control data, a given representation
> may indicate the current state of the requested resource, the
> desired state for the requested resource, or the value of some
> other resource, such as a representation of the input data within
> a client's query form, or a representation of some error
> condition for a response." (REST)
>
> So a 404 response to a GET request indicates that the state of
> the identified resource is "not known to exist", while a 410
> response indicates the state of the resource is "known to have
> existed". I am using an HTTP DELETE request to change the state
> of the resource from "exists" to "known to have existed" by
> manipulating a representation of the defined resource.
>
> To change the state of the resource from "known to have existed"
> to "not known to exist" I send a DELETE requesting the server
> remove all recollection of the resource having once existed --
> the semantics in both cases are "remove". The status code of the
> response to subsequent GET requests, i.e. the representation of
> resource state, has changed through my manipulation of a
> representation of the now-undefined resource through a generic
> REST interface.

Sorry, I think you're reading things into RFC 2616 which just
aren't there. 410 is a special case of 404, "not found". That's
all.

> > I'm also not sure the codes are right, the more I look at them.
> > Doesn't 410 imply that the resource once existed, where 404 is
> > more vague? If so, then wouldn't 410 mean "It was definitely
> > here, but all I find now is a zero-byte file" where 404 would
> > mean "I can't find any trace now, or at least none I'm willing
> > to admit to the likes of you"?
>
> The 410 response after one DELETE on my server indicates that
> the resource "used to exist", not that the file is zero bytes --
> that information is opaque to the client. The zero-byte thing is
> just my

404 is different from 200 + empty body, just like an empty file is
different from an absent file.

> ...

At this point I'm really confused about the whole point of this
discussion. Are you proposing this as a general solution? To what
problem?

Best regards, Julian
* Bill de hOra <bill@...> [2007-06-05 23:50]:
> Patrick Mueller wrote:
> >
> > I'm not a fan of JSON because it's JavaScript; I'm a fan of
> > JSON because it's a data structure. There are plenty of JSON
> > munching libraries available for various languages; scroll to
> > the bottom of http://json.org/
>
> Sounds like XSD+SOAP.

Except that JSON interoperates and requires a fraction of the
resources of an XML parser, much less an entire SOAP stack. There
are only a few, universally understood atomic data types and a
trivially simple grammar. Many JSON parsers begin life being coded
against View Source. I think that says something.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Eric J. Bowman <eric@...> [2007-07-06 21:35]:
> * Aaron Dalton <aaron@...> [2007-07-06 21:35]:
> > In that case the resource should be moved to a new "trashcan"
> > resource and the first DELETE should return a 301 or other
> > redirect pointing to said resource. The 2nd DELETE should be
> > directed to said trashcan and the file could then be
> > completely purged (204). Explicitely DELETING the same
> > resource twice is counter-intuitive and IMO breaks
> > idempotence.
>
> I make the same objection to giving DELETE the semantics of
> "move" by having subsequent GET requests respond with a
> redirect, as I make for using PUT to implement "remove"
> semantics.
You are not supposed to return a redirect on subsequent GET, only
in response to DELETE.
> To clarify, I am not implementing a trashcan. I am implementing
> a method whereby users of the service can toggle an URL between
> "used to exist" and "not known to have existed" and I have done
> so with an absolute minimum of complexity.
That’s fine. You don’t have to have a trashcan. You just have to
expose the deletedness of a resource as a separate resource.
> GET /foo/bar
< 200 OK
> DELETE /foo/bar
< 301 Moved Permanently
< Location: /deleted/foo/bar
> GET /foo/bar
< 410 Gone
> DELETE /foo/bar
< 301 Moved Permanently
< Location: /deleted/foo/bar
> DELETE /deleted/foo/bar
< 204 No Content
> GET /foo/bar
< 404 Not Found
Note how DELETEs to either resource are effortlessly idempotent
in this protocol design, with no concurrency issues. This is
clearly even less complex than what you have now.
It also doesn’t require DELETEs to 4xx resources, which feels
very strange from an interface uniformity point of view anyway.
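The exchange above can be modelled as a toy in-memory server to check the idempotence claim. This is a hypothetical sketch (the `Store` class and its path handling are invented, and it uses 303 rather than 301, following the correction later in the thread): replaying either DELETE leaves both the state and the responses unchanged.

```python
# States a live resource path can be in, per the trace above.
EXISTS, TRASHED, PURGED = "exists", "trashed", "purged"

class Store:
    """Toy model: /foo/bar is the resource itself; /deleted/foo/bar
    exposes its deletedness as a separate resource."""

    def __init__(self):
        self.state = {}  # live path -> EXISTS | TRASHED | PURGED

    def get(self, path):
        if path.startswith("/deleted/"):
            live = path[len("/deleted"):]
            # the deletedness resource exists only while trashed
            return 200 if self.state.get(live) == TRASHED else 404
        s = self.state.get(path)
        if s == EXISTS:
            return 200
        if s == TRASHED:
            return 410   # Gone: we remember it existed
        return 404       # never existed, or fully purged

    def put(self, path):
        self.state[path] = EXISTS
        return 204

    def delete(self, path):
        """Returns (status, Location-or-None)."""
        if path.startswith("/deleted/"):
            live = path[len("/deleted"):]
            if self.state.get(live) == TRASHED:
                self.state[live] = PURGED   # erase all memory of it
            return 204, None                # safe to repeat: no-op
        if self.state.get(path) in (EXISTS, TRASHED):
            self.state[path] = TRASHED
            # point the client at the deletedness resource
            return 303, "/deleted" + path
        return 404, None
```

Walking this model through the trace reproduces it exactly: 200, then 410 after the first DELETE, then 404 once the deletedness resource has itself been DELETEd, with every repeated DELETE a harmless no-op.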
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* A. Pagaltzis <pagaltzis@...> [2007-07-06 23:30]:
> > DELETE /foo/bar
> < 301 Moved Permanently
> < Location: /deleted/foo/bar

Actually, 301 is wrong, sorry. I think 303 is the right one here,
per my reading of the spec.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> Wow! That's quite inventive reasoning, but I believe it is
> exactly backwards.

Sorry, but your response has it exactly backwards, and I don't
believe you even read my post, as you are reversing the meaning of
my words in order to make your point. REST defines an architectural
style for building distributed hypermedia applications which may
use HTTP. This places constraints on HTTP which are not written
into RFC 2616. Uniform Interface comes to mind -- where does
RFC 2616 impose such a restriction? It doesn't, which is why I
continually stress that HTTP is not REST.

> The passage you cite from the REST thesis tells me the exact
> opposite: the protocols for FTP, Gopher, and WAIS must be
> "restricted to the semantics of [] REST" -- not the other way
> around. So just because FTP's STOR method might be (but does not
> have to be) mapped to PUT (thus giving PUT in this particular
> case replacement semantics), this doesn't imply that PUT must be
> restricted to replacement semantics in all cases.

So you're saying that FTP, Gopher and WAIS must be restricted to
the semantics of a REST connector in a RESTful app, but that app is
free to redefine those semantics when HTTP is the protocol? I think
not. That passage states that seamless back-compat with older
protocols is possible, but only if the application's connectors
implement a generic interface. Therefore, if the connector
semantics in a given app (regardless of protocol choice) are _not_
seamlessly backwards-compatible with FTP, Gopher and WAIS, then the
connector semantics don't meet the "principle of generality" (which
is all about re-use) so critical to REST.

The semantics of a REST connector may include "retrieve",
"replace", "remove", "append" and "info". This does not mean they
can't include "merge", but it does mean you'll need a different
method (HTTP PATCH comes to mind) to implement "merge", because
HTTP PUT is already clearly taken to mean "replace" in a REST
application.

If you bork PUT into "merge" semantics *and* PUT is the only method
with "replace" semantics (is there any dispute about that?), then
you have assigned two different meanings to HTTP PUT. While this
may be allowed under RFC 2616 (I still don't believe that was the
intent), it clearly breaks REST's Uniform Interface constraint. The
problem, if you define PUT as "merge", becomes one of how a client
or intermediary infers that the request does not mean the same
thing as STOR, except by making unrestricted assumptions about the
implementation of the PUT interaction. This is not a function of
media type; this is a function of the REST network API, i.e. the
connector.

> In other words, just because for some applications of HTTP the
> PUT method is described as having replacement semantics, this
> provides no evidence that RFC 2616 itself always restricts PUT
> to such semantics.

Where have I ever made the claim that REST is defined by RFC 2616?
This is exactly the opposite of the point I repeatedly try to make,
which is that REST places constraints on the implementation of HTTP
which are not part of HTTP in any by-the-letters interpretation of
RFC 2616. POST tunneling of a PUT request using a made-up
"x-no-really" extension header is perfectly legit under RFC 2616,
because nowhere in RFC 2616 is the "universal interface constraint"
written.

> By your argument, the mapping of POST to FTP's APPE is evidence
> that RFC 2616 intends POST to have ONLY "append semantics,"
> which is clearly a wrong argument.

That conclusion rests on a blatant misrepresentation of my
argument, or a misreading of RFC 2616; there is nothing in HTTP
which "clearly" states that POST means anything _but_ annotate or
append. Roy has clearly stated, in this group, that POST means
annotate / append in terms of REST.

What I assert is that the Uniform Interface constraint in REST,
when applied to HTTP and FTP as described in the dissertation,
clearly requires a direct mapping between STOR and PUT, because
that "socket" on the network API (REST connector) means "replace".
The principle of generality also requires the other mappings I
indicate. The REST connector has a "socket" with "append"
semantics; the most appropriate method from any protocol used in
the application must "plug in" to the universal interface: APPE or
POST.

-Eric
> > > DELETE /foo/bar
> > < 301 Moved Permanently
> > < Location: /deleted/foo/bar
>
> Actually, 301 is wrong, sorry. I think 303 is the right one here,
> per my reading of the spec.

Sorry, but if I make a DELETE request I should get back 2xx for
success, or 4xx (or 501) for failure. The only time I should get a
3xx response is if the resource I am requesting the DELETE on has
been moved, otherwise the semantics of DELETE are changed from
"remove" to "move" even if it's "move to trashcan".

-Eric
On 7/6/07, Eric J. Bowman <eric@...> wrote:
> "A resource can map to the empty set, which allows references to
> be made to a concept before any realization of that concept
> exists..."

"Before," sure; otherwise the whole notion of POSTing a new
resource (to other than a factory) can't work. And you *can* refer
to it after, of course, otherwise there wouldn't be a 404/410/etc.
distinction at all. But actually, I meant explain what the
distinction was in your app. That is, your reasoning for needing
both 404 and 410 - what it does for the client.

> The 410 response after one DELETE on my server indicates that the
> resource "used to exist", not that the file is zero bytes -- that
> information is opaque to the client. The zero-byte thing is just
> my implementation, it certainly doesn't redefine the 410 response
> as meaning anything about the status of the source file. The 410
> representation is merely a status code informing the client that
> the resource "used to exist".

Okay, that's what I was questioning: at some point in the
discussion, the 404/410 got reversed, and I was trying to figure
out why you'd want to go 200->404->410. 200->410->404 makes more
sense (though I'm still curious as to why, on a practical level,
you need the distinction).

> Which makes my analogy fail why, exactly? DELETE imposes no
> requirement on server behavior. As the server implementer I am
> free to move the source file, delete the source file, "mark" the
> source file or implement some sort of lookup table bearing no
> relation to the source file, and I am free to decide whether to
> use 404, 410 or both over time to indicate a successful DELETE.

I mean that the "mark" isn't really a system-level deletion, but a
fiction imposed by Windows Explorer or whomever. (I could be wrong;
I haven't groveled in the Windows filesystem and hope never to have
to. But that's what it looks like to me.)

> Sorry, but if your PUT results in a subsequent GET returning a
> 404 or a 410 response then you have still broken the uniform
> interface constraint of REST by defining PUT as "move" or
> "remove" instead of "replace".

I'm not sure I agree. Otherwise, you have to deal with the whole
expiry issue I mentioned, and all of the similar cases where PUT
(or even POST) can directly result in a 4xx response. Perhaps you
changed the access level, and now unauthorized people get a 404
because the server doesn't want to admit the existence of a
top-secret file - that still doesn't mean REST is broken if you
don't use a DELETE to get it there. Indeed, you could look at the
deletion flag as an authorization issue: if the resource is merely
flagged for deletion, it might well return 410 or 404 for
unauthorized people, but administrators could still see, and thus
choose to undelete, the resource.

I'm not saying the way you're wanting to do it isn't workable, and
I don't think I'm qualified to judge if it's RESTful or not. I'm
just saying it isn't the only way to do it (Perl programmers are
like that).
> You are not supposed to return a redirect on subsequent GET,
> only in response to DELETE.

Either way changes the semantics of DELETE to "move" instead of
"remove".

> That's fine. You don't have to have a trashcan. You just have to
> expose the deletedness of a resource as a separate resource.

Sorry, I think you're off in the weeds here. If I want to express
that a resource no longer exists on the server, I have a choice
between responding with a 404 or a 410 response. I see no reason to
move any files to a temporary URL, which, if the file is undeleted,
needs to change to a redirect, increasing URL proliferation and
application complexity for no reason I can discern from RFC 2616 or
REST.

-Eric
* Eric J. Bowman <eric@...> [2007-07-07 00:05]:
> The only time I should get a 3xx response is if the resource I
> am requesting the DELETE on has been moved, otherwise the
> semantics of DELETE are changed from "remove" to "move" even if
> it's "move to trashcan".
And the problem with that is what exactly?
FWIW, reading RFC 2616 §14.30, it seems a Location header can be
given with any response, so I suppose you could instead say
> DELETE /foo/bar
< 204 No Content
< Location: /deleted/foo/bar
Or heck, hypermedia-driven app state:
> DELETE /foo/bar
< 200 OK
< Content-Type: application/vnd.exampleorg.deleted+xml
<
< <purge href="/deleted/foo/bar"/>
The fact remains that this design does not have problems with
idempotency nor does it require as much hardwired knowledge
about the specific semantics of your protocol from clients
(aka REST-RPC hybrid).
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
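The hypermedia variant above can be illustrated with a short client-side sketch: the DELETE response body names the "purge" resource, so the client needs no hardwired knowledge of the /deleted/... URI scheme. (The media type and `purge` element come from the example in the post; they are not a registered format.)

```python
# Parse the hypothetical entity body of the 200 response to DELETE
# and extract the URI of the resource that purges the server's
# memory of the deleted resource.
import xml.etree.ElementTree as ET

body = '<purge href="/deleted/foo/bar"/>'
purge_uri = ET.fromstring(body).get("href")
assert purge_uri == "/deleted/foo/bar"
# a client would now issue: DELETE <purge_uri>
```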
A. Pagaltzis wrote:
> Except that JSON interoperates and requires a fraction of the
> resources of an XML parser, much less an entire SOAP stack. There
> are only a few, universally understood atomic data types and a
> trivially simple grammar.

Too simple, in fact. I've recently realized that the JSON grammar
is critically underspecified, and if a Yellow-headed Blackbird
hadn't just shown up at Jones Beach I might have already published
an article explaining how. :-) Nonetheless, look for details on The
Cafes in a week or two.

--
Elliotte Rusty Harold  elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
"Eric J. Bowman" <eric@...> writes: >> >>You are not supposed to return a redirect on subsequent GET, only >>in response to DELETE. >> > > Either way changes the semantics of DELETE to "move" instead of "remove". You are dictating the server's behaviour. It's entirely up to the server to move, remove, copy or do whatever else it fancies on the DELETE-ed resource as long as client can no longer GET it using the same URL. The semantic of DELETE is still the same in Aristotle's example: the resources is no longer accessible under that URL. DELETE in his example effectively deletes the resource identified by the URL such that you can't get it anymore with subsequent GET. To dictate what actually must be done by the server on the DELETE-ed resource is not reasonable because then one can demand that a DELETE must mean hard delete as in destruction of information, which may require judicious use of thermite on the server. YS.
> Nope. 404 means no representation is available.

Nope, a 404 response is a representation of the resource's
unavailability.

> Please define "undefined resource".

"A resource is a conceptual mapping to a set of entities..."

The default response of any naming authority is a 404 error, right?
That's because no conceptual mapping exists to any set of entities,
i.e. the resource is undefined by the naming authority (server).
Once a conceptual mapping exists, the resource has been defined (by
its conceptual mapping). This is a precondition for sending a 410
response -- the resource must have been defined by the naming
authority at some point in time. If the resource still exists, but
has been moved, the response could be a 30x to redirect the client.

But does deleting a resource delete the conceptual mapping that was
already established? Not necessarily; the 410 response in no way
indicates that the resource _wasn't_ moved. The case may be that
the resource was moved to a new domain on a new ISP, while the old
ISP is no longer under contract to maintain the forwarding. The 4xx
response codes imply nothing about the existence of a resource. An
undefined resource can only respond 404 to a GET request, which in
no way precludes a defined resource from responding 404. The 404
and 410 response codes only indicate _that_ the request has failed,
not _why_.

> Sorry, I think you're reading things into RFC2616 which just
> aren't there. 410 is a special case of 404, "not found". That's
> all.

Sorry, but RFC 2616 clearly states that the resource may still
exist regardless of a 404 or a 410 response. Those status codes
just mean the request failed, and I think you're the one reading
stuff in beyond that if you are saying that I can't make anything
but a GET or a PUT request when the response code is 404 or 410,
because those somehow imply that the resource does not exist.

> 404 is different from 200 + empty body, just like an empty file
> is different from an absent file.

How is an empty file different from an absent file, in terms of
REST, which couldn't care less about how a resource is generated?
The server is opaque from the client perspective; a client
receiving a 404 or a 410 response can make no assumptions about
whether a file exists or does not exist on the server, or used to
exist, or won't exist again at some point in the future. The 404
and 410 responses simply mean the request failed.

> At this point I'm really confused about the whole point of this
> discussion. Are you proposing this as a general solution? To
> what problem?

I merely posted my solution to a problem I had, for the sake of
discussion and as an example of how DELETE doesn't only have to
mean "remove a file from the server". Imagine my surprise when I am
told that I can't use a 410 Gone after a DELETE, or that I can't
send a DELETE message to a resource just because it is responding
410 Gone, or that I don't understand the RFCs, etc. -- which comes
across as a pissing match instead of the help I was looking for.
Which, by the way, I already thanked Aristotle for, and we are
changing the If-None-Match on the first DELETE to an If-Match on
the second DELETE.

Beyond that, I was hardly expecting anyone to respond, "Hey, that's
pretty neat", because I have grown accustomed to any unorthodox
solution I present being attacked from all sides. Yet nobody ever
specifically states what is un-RESTful about my implementation or
why they think that; they only seem to make statements about how I
*must* be doing something wrong, or quote RFC 2616 to me even
though that isn't where REST is defined.

So as to the question of what is the point of this discussion, I
never really know, for the most part. But overall, if the takeaway
is that I or someone else lurking here learns something, then I'm
pretty happy. I am willing to both teach what I know of REST and
learn what I do not know of REST.

At this point, I'm convinced that I have a perfectly RESTful
implementation, and I am not afraid to defend it from those who do
not understand, or explain it to those who are trying to
understand, or learn from those who can explain to me, in specific
terms of REST constraints or standards violation, _why_ what I am
doing is "wrong" -- and convince me that their own interpretation
is more correct than mine.

-Eric
* Eric J. Bowman <eric@...> [2007-07-07 00:10]:
> Sorry, I think you're off in the weeds here. If I want to
> express that a resource no longer exists on the server, I
> have a choice between responding with a 404 or a 410 response.
> I see no reason to move any files to a temporary URL,

Why do you keep thinking in terms of files? Files are irrelevant.
What does moving a file to a URI even mean?

You’re exposing your knowledge of the previous existence of a
resource as a separate resource.

I admit that responding with a redirect is the wrong answer. It
took me a few iterations to get to a 200 response to a DELETE with
a link in the entity-body, but that is the right approach. The
server responds to the DELETE by saying “OK, it’s gone; here’s a
description of how you can also delete my memory of its previous
existence by deleting the following resource.”

The resource and the server’s memory of its existence are two
separate Things, and should be exposed separately.

> which if the file is undeleted, needs to change to a redirect,
> increasing URL proliferation and application complexity for no
> reason I can discern from RFC 2616 or REST.

Your double-DELETE is a very surprising interpretation of what
RFC 2616 allows. You are effectively overloading the meaning of 410
in a way that the uniformity constraint reserves for the entity
body.

You seem to be following the WebDAV school of HTTP, which considers
resources somehow equivalent to files on the server’s disk and
prefers to model additional aspects of state by putting them into
the method instead of exposing them as resources.

URI proliferation is about multiple names for the same resource. I
don’t understand how this applies to my proposed protocol design.
From a REST point of view, URI starvation (where you don’t expose
sufficient state as separate resources and overload aspects of
other messages instead) is worse.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> You are dictating the server's behaviour. It's entirely up to
> the server to move, remove, copy or do whatever else it fancies
> on the DELETE-ed resource as long as client can no longer GET it
> using the same URL.

Sorry, but it's REST that dictates the server's behavior. The line
in RFC 2616 about how the server can do anything is not relevant,
because HTTP is not REST. If I request a DELETE, then the response
code from that request should tell me success (2xx) or failure
(4xx) of my request that the resource be removed from the server.

> The semantic of DELETE is still the same in Aristotle's example:
> the resources is no longer accessible under that URL. DELETE in
> his example effectively deletes the resource identified by the
> URL such that you can't get it anymore with subsequent GET.

You guys may have a point that the result of the DELETE action
itself can be a 30x redirect, but that can't be the _response_ to
the DELETE request itself, only to subsequent GET requests. The
only time the response to a DELETE request being a 30x redirect
_might_ be valid is to indicate to the client to re-try the DELETE
on that other resource, like a 30x response to a PUT request.

-Eric
* Eric J. Bowman <eric@...> [2007-07-07 00:45]:
> > Nope. 404 means no representation is available.
>
> Nope, a 404 response is a representation of the resource's
> unavailability.

You seem to be confusing the status code with the entity body.

> Yet nobody ever specifically states what is un-RESTful about my
> implementation or why they think that, they only seem to make
> statements about how I *must* be doing something wrong or
> quoting RFC 2616 to me even though that isn't where REST is
> defined.

I did. Your design breaks uniformity to avoid hypermedia: it
overloads the meaning of a response code in order to avoid exposing
server state as a separate resource.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Eric J. Bowman <eric@...> [2007-07-07 00:55]:
> Sorry, but it's REST that dictates the server's behavior.

Oh dear.

> The only time the response to a DELETE request being a 30x
> redirect _might_ be valid, is to indicate to the client to
> re-try the DELETE on that other resource, like a 30x response
> to a PUT request.

Yes, I recanted very quickly on the redirect. That was wrong. The
right response is 200 OK, with hypermedia used to name the resource
representing the server’s memory of the prior existence of the
connection between the URI and the resource. (I didn’t simply say
“prior existence of the deleted resource” because you insist that a
URI and a resource are not the same thing.)

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> But actually, I meant explain what the distinction was in your
> app. That is, your reasoning for needing both 404 and 410 - what
> it does for the client.

I think the question is, what is HTTP's reasoning for having both
the 404 and the 410 response codes? I am merely following a SHOULD
directive in the standard -- it isn't just my implementation that's
unorthodox; the whole notion of using 410 responses and the DELETE
method is pretty unorthodox at this point in time, as nobody really
uses either, or the DELETE implementation is entirely pedestrian in
nature when it is used. I am aiming for a richer application which
makes the fullest use of HTTP possible.

So my reasoning is the same as why I don't settle for CMS apps
which indicate a 404 error with a custom page and a 200 OK response
-- HTTP is a richer protocol than that, and there just isn't any
reason not to use whichever response code is appropriate for the
interaction. Personally, I would like to see 410 responses instead
of 404 responses for URLs that no longer exist (or preferably,
redirects), even though I know the resources they used to identify
still do exist and the only change has been the site's CMS and
therefore its URI allocation. Either set up forwarding, or mark all
the old URLs 410 Gone, please. Why? Because it's more appropriate.

As the server administrator, I am not satisfied making the reaction
to DELETE always equal 410 Gone, because sometimes a DELETE is made
because of a misspelt URL on a PUT request. In which case I, as the
system administrator, would like a simple HTTP-based solution to
change that 410 back into the 404 it was never intended *not* to be
-- instead of needing to ssh into the server and interact with the
filesystem or directly with the database. That's a whole lotta
hassle to fix a typo, so why not just DELETE the 410 Gone response?
I thought.

> I'm not sure I agree. Otherwise, you have to deal with the whole
> expiry issue I mentioned, and all of the similar cases where PUT
> (or even POST) can directly result in a 4xx response. Perhaps
> you changed the access level, and now unauthorized people get a
> 404 because the server doesn't want to admit the existence of a
> top-secret file - that still doesn't mean REST is broken if you
> don't use a DELETE to get it there. Indeed, you could look at
> the deletion flag as an authorization issue: if the resource is
> merely flagged for deletion, it might well return 410 or 404 for
> unauthorized people, but administrators could still see, and
> thus choose to undelete, the resource.

OK, but the original suggestion was to PUT a zero-byte file, which
is a direct "remove" operation on my implementation, so in relation
to my implementation that would not be a RESTful solution.

> I'm not saying the way you're wanting to do it isn't workable,
> and I don't think I'm qualified to judge if it's RESTful or not.
> I'm just saying it isn't the only way to do it (Perl programmers
> are like that).

Right, and I didn't mean to slam the notion of an expiration date,
or the other valid cases you mention -- only to explain why it
would not be RESTful for a zero-byte PUT to function as a DELETE.
Of course, that is qualified as "in my implementation" as well;
although I am pretty sure it would go for any conceivable setup, I
won't make that leap, because I allow for others to think outside
my box.

I only deem anything "un-RESTful" if I can specifically explain
what REST constraint in particular has been broken, and how, and
why it should not be. For example, any application that uses PUT to
impose "merge" semantics is, by my thinking, inherently un-RESTful,
because the uniform interface constraint requires PUT to mean
"replace", and I've fully detailed my position. Nobody has to be a
100% expert on REST before being able to point out a broken
constraint; the only requirement is an understanding of the
constraint in question, and a willingness to accept that you're
wrong and learn from it.

-Eric
* Nic James Ferrier <nferrier@...> [2007-06-05 21:25]:
> For example, here's a bit of Python code I've written just recently:
>
> return {"abbr":
> {"@class": "user",
> "@title": strip_openid_url(openid_profile.openid),
> "div":
> [{"span":
> {"@class": "nickname",
> "span": openid_profile.nick_name }},
> {"ul":
> {"@class": "mugshots",
> "div":
> [{"li":
> [{"img":
> {"@class": "mugshot",
> "@alt": mugshot.name,
> "@src": file_field_get_url(mugshot.shot) }},
> {"img":
> {"@class": "avatar",
> "@alt": mugshot.name,
> "@src": "/sitemedia/%s" % (mugshot.thumb)}}]} for mugshot in openid_profile.mugshot_set.all()]}}]}}
>
> pretty obvious what that is doing.
Yeah, that’s a good example of using JSON for something you
shouldn’t. You’re using Python, so use Genshi. With Javascript
I’d use E4X.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> > The only time I should get a 3xx response is if the resource I
> > am requesting the DELETE on has been moved, otherwise the
> > semantics of DELETE are changed from "remove" to "move" even
> > if it's "move to trashcan".
>
> And the problem with that is what exactly?

Breaking the generic interface constraint. If the semantics of the
interaction are "move resource A to URL B", then resource A should
be PUT to URL B, then URL A deleted or set to redirect to URL B.
This "move resource A to URL B" interaction semantic can also be
modeled using WebDAV, which includes a MOVE method, IIRC. If you
want to implement interaction semantics for which no method exists
in HTTP, then you use POST -- but the result is more like an RPC
method, where the POSTed entity contains instructions on how to
move resource A to URL B, then delete (or redirect) URL A.

So, to meet the REST constraint for the generic connector, either
use a method which reflects the actual semantics of the interaction
as a whole (perhaps by using a protocol which already includes a
MOVE method), or devise a means to achieve the same result using
the methods described by the protocol in use, constrained to the
semantics of a REST connector.

> FWIW, reading RFC 2616 §14.30, it seems a Location header can be
> given with any response, so I suppose you could instead say

Again, HTTP is not REST. Just because RFC 2616 allows a Location
header to be sent with any response doesn't mean that response
won't break the Uniform Interface constraint of REST. You have to
look at the semantics of the interaction taken as a whole: if those
semantics are "move", then implement it with MOVE using WebDAV, or
if you are limited to using HTTP, then make it a two-step operation
using PUT and DELETE instead of tunneling "move" semantics through
DELETE or even POST.

> The fact remains that this design does not have problems with
> idempotency nor does it require as much hardwired knowledge
> about the specific semantics of your protocol from clients
> (aka REST-RPC hybrid).

But it does require the server connector to understand the
semantics of DELETE to mean something other than "remove" in
addition to meaning "remove", depending on the URL the DELETE
request is sent to, or some other shared knowledge between client
and server. In my setup, the response to a DELETE request is
straightforward -- the status of the resource changes to reflect
the request, no matter what client is making the DELETE. The
optional, sysadmin-only second DELETE does require knowledge of the
specific protocol in that it must have an If-Match header, but the
semantics of making such a DELETE request are still "remove", which
doesn't break the Uniform Interface constraint.

-Eric
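For contrast, the single-URL lifecycle Eric describes (first DELETE marks the resource 410 Gone; a second DELETE carrying a matching If-Match header purges it to 404) can be sketched the same way. This is a hypothetical in-memory Python model with invented names and a simplified ETag, not Eric's actual server; a real server would also check authorization on the second DELETE:

```python
# Model of the two-DELETE lifecycle on one URL: 200 -> 410 -> 404.
class LifecycleServer:
    def __init__(self):
        # path -> (lifecycle status, current ETag)
        self.state = {"/foo/bar": ("live", 'W/"v1"')}

    def get(self, path):
        st = self.state.get(path)
        if st is None:
            return 404
        return 200 if st[0] == "live" else 410

    def delete(self, path, if_match=None):
        st = self.state.get(path)
        if st is None:
            return 404
        status, etag = st
        if status == "live":
            self.state[path] = ("gone", etag)
            return 204              # resource now responds 410 Gone
        if if_match != etag:
            return 412              # Precondition Failed: If-Match required
        del self.state[path]        # purge: back to 404, "never existed"
        return 204

s = LifecycleServer()
assert s.get("/foo/bar") == 200
assert s.delete("/foo/bar") == 204
assert s.get("/foo/bar") == 410
assert s.delete("/foo/bar") == 412                   # no If-Match sent
assert s.delete("/foo/bar", if_match='W/"v1"') == 204
assert s.get("/foo/bar") == 404
```

Both DELETEs are idempotent in the HTTP sense here: repeating either request leaves the resource in the same state it produced the first time.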
> >Why do you keep thinking in terms of files? Files are irrelevant. >What does moving a file to a URI even mean? > Sorry, I meant to use the word "source", which could be a file, or it could be a database cell, or a combination of both, or something else entirely. But if you are changing the identifier of that source, you are making it a new resource, which is either a MOVE or a COPY. All I am doing is flagging the source as having been removed, no new URL required. > >You're exposing your knowledge of the previous existence of a >resource as a separate resource. > I suppose you could do it that way if you wanted and could make it RESTful, but I am merely representing the resource as having been removed, not assigning it a new identifier that must be interpreted as having the same meaning as a 4xx response even though it's giving a 200 OK response. How, by dereferencing the URL which includes "/trash/" in its path, does the server convey to me that the file has been removed? If I must infer this from the URL then I'm forgetting that URLs are opaque. > >I admit that responding with a redirect is the wrong answer. It >took me a few iterations to get to a 200 response to a DELETE >with a link in the entity-body, but that is the right approach. > I agree that a link in a 200 or 204 response is better than a redirect, but I still believe such a response breaks the Universal Interface constraint by tunneling "move" through "remove" and that a PUT followed by a DELETE is a RESTful, RFC 2616-based solution. > >The server responds to the DELETE by saying “OK, it’s gone; >here’s a description of how you can also delete my memory of >its previous existence by deleting the following resource.†> You're saying that sending the client a different URL that also needs deletion to fully remove the resource, is superior than sending two DELETE requests to the same URL to achieve the same thing. 
I'm still not seeing the need for the added complexity of executing a MOVE as part of a DELETE request, and I still don't understand how changing an URL to reflect state in the path segment is understandable by intermediaries who only interpret 410 Gone as meaning "removed", not a 200 OK response from a different URL that includes "/trash/" in the path. Once you've assigned a "deleted" URL to the resource, you now have two identifiers for the same resource. In and of itself, this is not a problem, except that each URL gives a different representation of resource state (one is 4xx, the other 200). Which one is authoritative about the resource state being "removed", the 404 or 410 response, or the 200 OK response? Wouldn't this confuse user-agents, and users? > >The resource and the server’s memory of its existence are two >separate Things, and should be exposed separately. > I'm sorry, but I see all of this as simply changing the state of one resource. First, it exists. Then, it is gone. Then, as an option, it was never there. But none of this implies that the server has forgotten, or should have forgotten, about the resource. I am merely altering the response to requests for one resource, to reflect the current state of that resource, by returning a status code. > >Your double-DELETE is a very surprising interpretation of what >RFC 2616 allows. You are effectively overloading the meaning of >410 in a way that the uniformity constraint reserves for the >entity body. > ??? The Uniform Interface constraint pertains to request methods and their corresponding response codes. Where does RFC 2616 tell me that the server can no longer accept requests once a resource has had its status changed to 410 Gone? And where does anything say that response codes apply to the entity body? They convey the status of the resource, but the response may contain both resource headers and entity headers. 
RFC 2616 clearly allows a DELETE to change the status of a resource to either 404 or 410, this is exactly what the Uniform Interface constraint means. There is no restriction in either RFC 2616 or REST which states that the resource must respond 200 OK before a DELETE request may be accepted. In fact, RFC 2616 clearly states that a resource responding 404 or 410 can still exist -- it may just be a matter of privilege level, where authorization is required before a GET will respond 200 OK. > >You seem to be following the WebDAV school of HTTP which >considers resources somehow equivalent to files on the server’s >disk and prefers to model additional aspects of state by putting >them into the method instead of exposing them as resources. > That characterization couldn't be further from my view of things. We are discussing a situation where there is both a resource, and a source, I use the term "source file" because that is exactly what I am discussing in this thread -- my implementation, which in this case is using a file. I used the example I used because too many people are claiming that the deletion of a resource must result in the deletion of the source. So it is just a narrative convenience to speak in terms of a DELETE only changing the status of the resource without touching the source "file", because only the resource mapping gets deleted -- or rather, has its status changed to express a "removed" state to the requesting client. My application uses one URL and content negotiation to serve four different "text/html" representations and three "application/xhtml+xml" representations (plus one Atom and one PDF) depending on client capability, so I would have to say that I am keenly aware of the separation between the HTTP resource/representation model and the file-centric models of FTP and WebDAV. > >URI proliferation is about multiple names for the same resource. >I don’t understand how this applies to my proposed protocol >design. 
>From a REST point of view, URI starvation (where you
>don’t expose sufficient state as separate resources and overload
>aspects of other messages instead) is worse.
>

I have to disagree there. If I have one resource which has a variety of possible states, then I want the response to a request for that resource to reflect the current state of the resource, in the retrieved representation -- either as an entity body or as a control code. I do not want to change the semantics of the mapping of my resource when state changes; the resource is a conceptual mapping that does not include any information about resource state. State can only be conveyed in a representation of the resource, not by deducing the meaning of the assigned URL.

I name my resource "a thing", not "an existing thing" or "a deleted thing" depending on its state. The HTTP response indicates state, not the resource identifier. If I have some other semantic mapping for the same resource, then I assign it a new URL, i.e. sometimes I want to describe the "thing of the day". Under your method, that would need to be changed to "the deleted thing of the day" if someone deletes "a thing" at the wrong time.

-Eric
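Eric's position -- that DELETE changes the status a resource advertises without erasing the server's conceptual mapping, and that the server owner retains discretion to bring the resource back -- can be sketched as a toy naming authority. This is an illustration only, not Eric's actual implementation; the class name and URLs are invented:

```python
class ResourceMap:
    """Toy naming authority: the conceptual mapping for a URL
    survives DELETE; only the advertised status changes."""

    def __init__(self):
        self._entries = {}  # url -> {"state": ..., "body": ...}

    def put(self, url, body):
        self._entries[url] = {"state": "exists", "body": body}
        return 200

    def delete(self, url):
        entry = self._entries.get(url)
        if entry is None:
            return 404           # never defined by this authority
        entry["state"] = "gone"  # mapping kept, status changed
        entry["body"] = None
        return 200

    def get(self, url):
        entry = self._entries.get(url)
        if entry is None:
            return 404, None
        if entry["state"] == "gone":
            return 410, None     # "removed", but not forgotten
        return 200, entry["body"]
```

Note that a later PUT to a "gone" URL simply re-establishes the mapping, which is the "discretion of the server owner" point Eric argues later in the thread.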
> >> Nope, a 404 response is a representation of the resource's >> unavailability. > >You seem to be confusing the status code with the entity body. > How so? A response to a request for a resource includes a representation and optionally, an entity body. The representation indicates the state of the resource, which may be an error response: "Depending on the message control data, a given representation may indicate the current state of the requested resource, the desired state for the requested resource, or the value of some other resource, such as a representation of the input data within a client's query form, or a representation of some error condition for a response." A 404 response with no entity body is still a representation of the (possible nonexistence of the) requested resource. > >> Yet nobody ever specifically states what is un-RESTful about my >> implementation or why they think that, they only seem to make >> statements about how I *must* be doing something wrong or >> quoting RFC 2616 to me even though that isn't where REST is >> defined. > >I did. Your design breaks uniformity to avoid hypermedia: it >overloads the meaning of a response code in order to avoid >exposing server state as a separate resource. > I appreciate how you phrased your objection, however, I must still disagree because I'm not sure where you're coming up with the requirement for application state to be part of the (opaque) resource identifier. -Eric
>
>> Sorry, but it's REST that dictates the server's behavior.
>
>Oh dear.
>

Or, to phrase that better. I can do whatever I want with the origin server, but REST constrains the behavior of the _connector_ that origin server uses to communicate with the outside world.

>
>(I didn’t simply say “prior existence of the deleted resource”
>because you insist that a URI and a resource are not the same
>thing.)
>

But they aren't the same thing. A resource is a conceptual mapping, which may have one or more identifiers at a given point in time. The resource is still "a thing" even if one of its identifiers refers to "the thing of the day", because the URLs are each a separate identifier. Even if their semantics identify the same resource at a given moment, the "thing of the day" URL is not the same thing as "a thing".

-Eric
On 7/6/07, Eric J. Bowman <eric@...> wrote: > I think the question is, what is HTTP's reasoning for having both > the 404 and the 410 response codes? I am merely following a > SHOULD directive in the standard -- it isn't just my implementation > that's unorthodox, the whole notion of using 410 responses and > the DELETE method is pretty unorthodox at this point in time as > nobody really uses either, or the DELETE implementation is entirely > pedestrian in nature when it is used. I am aiming for a richer > application which makes the fullest use of HTTP possible. No, I know why HTTP has different response codes. I was just curious what that meant in your specific case... why your case used the conditions that lead to 404 and 410 and such. > OK, but the original suggestion was to PUT a zero-byte file, which is > a direct "remove" operation on my implementation, so in relation to > my implementation that would not be a RESTful solution. Yeah, I don't like that suggestion just because a zero-byte file feels more like a mistake. Even if that's what the server ends up with. I don't, on the other hand, have an objection to a PUT resulting in a resource moving (as gets mentioned elsewhere in the thread). Sometimes that means that the "location" shouldn't have been part of the URL, but sometimes you just can't get around it. For instance, if you PUT a new username to a user's profile, Wirebird moves the profile. I'm certainly not going to let a client, even one with moderator access, effectively take the user off the system... what if something happens between then and the POST to the new username? > I only deem anything "un-RESTful" if I can specifically explain what > REST constraint in particular has been broken, and how, and why it > should not be. 
> For example, any application that uses PUT to impose
> "merge" semantics is, by my thinking, inherently un-RESTful because
> the uniform interface constraint requires PUT to mean replace, and
> I've fully detailed my position. Nobody has to be a 100% expert on

I haven't found a merge situation that couldn't be solved by reframing the bits of the resource that are being merged into resources (representations? I can never remember. Things What Have URLs, in any event) in their own right. Not saying it doesn't happen, but yeah.
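The reframing Karen describes -- decomposing a composite resource into parts with their own URLs, so PUT keeps its replace semantics instead of acquiring merge semantics -- might be sketched like this. The URLs and store layout are invented for illustration:

```python
# Instead of one composite resource whose PUT would have to "merge"
# partial updates, each mergeable part gets its own URL, so PUT stays
# a plain replace. Store and URLs are hypothetical.
store = {
    "/user/42/profile/name":  "Alice",
    "/user/42/profile/email": "alice@example.org",
}

def put(url, body):
    store[url] = body  # uniform PUT: full replacement of this resource
    return 200

def get_composite(prefix):
    # The composite view is derived by the server from its parts;
    # no merge semantics ever appear on the wire.
    return {u: v for u, v in store.items() if u.startswith(prefix)}
```

Updating one part then replaces only that part's representation, while the composite remains a server-side derivation.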
* Eric J. Bowman <eric@...> [2007-07-07 03:10]: > >> Sorry, but it's REST that dictates the server's behavior. > > > >Oh dear. > > Or, to phrase that better. I can do whatever I want with the > origin server, but REST constrains the behavior of the > _connector_ that origin server uses to communicate with the > outside world. Citation please. I disagree that there is any part of REST which imposes any particular behaviour on the server. > >(I didn’t simply say “prior existence of the deleted resource” > >because you insist that a URI and a resource are not the same > >thing.) > > But they aren't the same thing. [snip defense of the statement] Which is why phrased it in the more complicated way, even though for the purposes of our discussion the complicated phrasing provided no further insight. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
>
>Yeah, I don't like that suggestion just because a zero-byte file feels
>more like a mistake. Even if that's what the server ends up with.
>

Well, we really intend for stuff that's removed to be gone for good, may as well free up some resources. It's really easy to do a case switch based on whether filesize is zero, i.e. false. That particular approach was suggested by one of my coders to reduce the amount of code, and I'm always in favor of using less code to get the job done.

>
>I don't, on the other hand, have an objection to a PUT resulting in a
>resource moving (as gets mentioned elsewhere in the thread). Sometimes
>that means that the "location" shouldn't have been part of the URL,
>but sometimes you just can't get around it. For instance, if you PUT a
>new username to a user's profile, Wirebird moves the profile. I'm
>certainly not going to let a client, even one with moderator access,
>effectively take the user off the system... what if something happens
>between then and the POST to the new username?
>

I don't have enough information about Wirebird to make any firm judgments, but the interaction you describe sounds un-RESTful to me. If you're the server administrator, then disallow any PUT that would break the system by returning a 400 Bad Request or a 403 Forbidden (both indicate "do not repeat" without somehow altering the request, i.e. an auth header) instead of assigning non-generic semantics to PUT. Or, redirect the user to the appropriate URL so they can repeat the request.

To answer your "what if something happens" question, my response is "That's why you don't use PUT if the interaction semantics are move." If a "move" has occurred, then the use case for PUT was not met and it's time for another design iteration (yes, I know, these are indeed frustrating but a broken constraint is a broken constraint).

Take one resource, "thing of the day". It is a 307 redirect to "a thing" one day, and "another thing" the next day.
If I want to PUT a new version of "a thing" and I make that PUT to "thing of the day" I should be redirected to "a thing" and required to repeat the request there. To change "thing of the day" I'd POST the URL for "another thing" to "thing of the day" and the server would change its Location header for that resource accordingly, as it always reflects the last URL pushed onto that stack (appended to the resource). URL A = "a thing" URL B = "thing of the day" So it's perfectly allowable for a PUT to one URL to affect the contents of another URL through an alias. Because URL B is always a redirect, it is always in sync with changes to URL A. But, it is not RESTful to allow the PUT to "thing of the day" to update "a thing" because the semantics of such an interaction would be "move": PUT the representation to URL B, move the representation to URL A, and replace the representation of URL B the user *just requested be PUT there* with the 307 redirect that the user *just tried to replace* with a new representation of URL B. This isn't just breaking a REST constraint, this goes against the spec for PUT: "In contrast, the URI in a PUT request identifies the entity enclosed with the request -- the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource." The user asked for the file to be replaced, not moved -- i.e. to apply the change to URL A and automatically switch URL B back to a redirect is directly against the HTTP protocol (which doesn't stop anyone from implementing it wrong), not to mention the user intent. So the correct response would be a 307 redirect to URL A, with the client retaining the option (possibly with user intervention) of whether to re-try the request. 
Otherwise, the semantics of the mapping of URL B have varied -- first, they meant "for now, see URL A" then they meant "200 OK, so this has nothing to do with URL A" then they changed back to "for now, see URL A" and you're exactly right, this causes all sorts of problems, but I say that's because PUT has been given the semantics of "move". Remember, "The only thing that is required to be static for a resource is the semantics of the mapping, since the semantics is what distinguishes one resource from another." So if you have a resource whose semantics are "thing of the day" implemented with a 307 redirect, that 307 redirect had better not vary its target throughout the day, or become a 200 OK while the "move" is pending (as this will allow caches to mark the 307 response as stale before its stated expiration time of 24h) depending on how many times "a thing" gets updated by using PUT to "move". If the PUT was successful, then the 20x response lied, as the move happened so fast that to the client, the 307 response was never updated beyond getting marked "stale" in every intermediary including the client cache. Kinda like anti-scaling. ;-) -Eric
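Eric's "thing of the day" design -- an alias URL whose mapping semantics never vary because it only ever answers with a redirect, with POST reserved for retargeting the alias -- might be sketched as follows. This is my sketch; the method names and URLs are hypothetical:

```python
class AliasResource:
    """URL B ("thing of the day"): always a 307 redirect to some
    target (URL A), never a 200 with the target's content."""

    def __init__(self, target):
        self.target = target

    def get(self):
        return 307, self.target  # "for now, see URL A"

    def put(self, body):
        # PUT means replace, and this resource *is* a redirect, so
        # the server redirects rather than silently "moving" the
        # entity -- the client may then repeat the PUT at URL A.
        return 307, self.target

    def post(self, new_target):
        # Changing which thing is "of the day" is a state change of
        # the alias itself, so it goes through POST.
        self.target = new_target
        return 200, self.target
```

Because the alias never becomes a 200, intermediary caches only ever see it as a redirect, which is the caching property Eric argues for above.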
>
>> Or, to phrase that better. I can do whatever I want with the
>> origin server, but REST constrains the behavior of the
>> _connector_ that origin server uses to communicate with the
>> outside world.
>
>Citation please. I disagree that there is any part of REST which
>imposes any particular behaviour on the server.
>

"Server" means both the origin server component, and the server connector. REST dictates that the server connector meet the generic interface constraint, meaning the server connector must behave in a very specific way that may be generically understood by clients as well as intermediary caches. REST also dictates that the origin server manage the namespace in such a way that the semantics of the mappings are static.

"An origin server uses a server connector to govern the namespace for a requested resource. It is the definitive source for representations of its resources and must be the ultimate recipient of any request that intends to modify the value of its resources. Each origin server provides a generic interface to its services as a resource hierarchy. The resource implementation details are hidden behind the interface."

I would say that the failure to impose either of two behaviors on the server breaks REST. One, REST requires that the semantics of the mappings on the server do not change, and two, REST requires that the server connector use a generic interface. Those two implementation details can't be hidden behind the interface, they _are_ the interface. There are no constraints as to how an application can achieve these two required behaviors, as those details remain hidden.

-Eric
On 7/6/07, Eric J. Bowman <eric@...> wrote: > Well, we really intend for stuff that's removed to be gone for good, > may as well free up some resources. It's really easy to do a case > switch based on whether filesize is zero, i.e. false. That particular > was suggested by one of my coders to reduce the amount of code, and > I'm always in favor of using less code to get the job done. Clarification: I don't have any objections to the zero-byte file. It's the zero-byte PUT that disturbs me, perhaps irrationally. "Did they mean to delete this resource, or did the body of the PUT get misplaced due to an error?" > I don't have enough information about Wirebird to make any firm > judgments, but the interaction you describe sounds un-RESTful to me. > If you're the server administrator, then disallow any PUT that would > break the system by returning a 400 Bad Request or a 403 Forbidden > (both indicate "do not repeat" without somehow altering the request, > i.e. an auth header) instead of assigning non-generic semantics to PUT. > Or, redirect the user to the appropriate URL so they can repeat the > request. You lost me there. If they want to replace the username with something invalid, sure, they get one of a variety of 4xx responses depending on what the problem is. No issues there. > To answer your "what if something happens" question, my response is > "That's why you don't use PUT if the interaction semantics are move." But that's what I'm saying: the PUT works *best* since you're not leaving the server going "Okay, I got rid of the old version... now what? Hello?" if your client falls off the net partway through a move done with DELETE+POST. Wasn't that what you were suggesting? > If the PUT was successful, then the 20x response lied, as the move > happened so fast that to the client, the 307 response was never updated > beyond getting marked "stale" in every intermediary including the client > cache. Kinda like anti-scaling. ;-) Hmm. 
I think I'm going to have to work through this example a bit more before I give a real response, since I'm not quite seeing it as the same thing on a first read.
Incidentally, Eric, I don't know if it's just me, but whenever I forget to take your address off a list reply, your mail server bounces the direct version back to me as spam. A body could get to taking that personal-like.
> >Incidentally, Eric, I don't know if it's just me, but whenever I >forget to take your address off a list reply, your mail server bounces >the direct version back to me as spam. A body could get to taking that >personal-like. > Sorry about that, again, everyone. One of these days I will be changing mail providers. Karen, I'd have added you to my whitelist, except I have no way of seeing your e-mail address, because Mailsnare pretty much deletes everything from rest-discuss. If Yahoo didn't keep deleting me as a result, then I'd be able to log in to my Yahoo and see your e-mail address when I use a web browser to read rest-discuss. So I'm pretty much stuck with making a wholesale change to my e-mail just because a couple of mail lists don't work with it, or not participating here, or trying to participate here and ticking people off. Darn Internet. Try sending me an e-mail directly, anyone who does gets on my whitelist. Messages from the list will still bounce and I won't see them, but it would clear up the problem you mention. -Eric
A. Pagaltzis wrote:
> ...
> I admit that responding with a redirect is the wrong answer. It
> took me a few iterations to get to a 200 response to a DELETE
> with a link in the entity-body, but that is the right approach.
> ...

Right.

> ...
> You seem to be following the WebDAV school of HTTP which
> considers resources somehow equivalent to files on the server’s
> disk and prefers to model additional aspects of state by putting
> them into the method instead of exposing them as resources.
> ...

Please stop the WebDAV bashing. WebDAV does indeed use new methods where new resources may have worked as well, but that's a completely separate argument :-).

> URI proliferation is about multiple names for the same resource.
> I don’t understand how this applies to my proposed protocol
> design. From a REST point of view, URI starvation (where you
> don’t expose sufficient state as separate resources and overload
> aspects of other messages instead) is worse.

Yes.

I'm still not sure what the two-stage delete is good for in practice, but if you want to be able to undo the first one, you had better implement it internally by assigning a URI, and then allow the client to MOVE (gasp!) it back.

Best regards, Julian
Eric J. Bowman wrote: > >Nope. 404 means no representation is available. > > > > Nope, a 404 response is a representation of the resource's unavailability. I don't think RFC2616 supports that point of view. > >Please define "undefined resource". > > > > "A resource is a conceptual mapping to a set of entities..." Well, that's Rest, not HTTP. I don't think it will work well to defend a funny design using HTTP's 404/410 distinction with definitions from Rest. Unless you can demonstrate the equivalent of 404/410 in Rest, of course. > The default response of any naming authority is a 404 error, right? > That's because no conceptual mapping exists to any set of entities, > i.e. the resource is undefined by the naming authority (server). Once > a conceptual mapping exists, the resource has been defined (by its > conceptual mapping). This is a precondition for sending a 410 > response -- the resource must have been defined by the naming authority, > at some point in time. OK. > If the resource still exists, but has been moved, the response could be > a 30x to redirect the client. But, does deleting a resource delete the > conceptual mapping that was already established? Not necessarily, the > 410 response in no way indicates that the resource _wasn't_ moved, the > case may be that the resource was moved to a new domain on a new ISP > while the old ISP is no longer under contract to maintain the forwarding. Yes. > The 4xx response codes imply nothing about the existence of a resource. > An undefined resource can only respond 404 to a GET request, which in no > way precludes a defined resource from responding 404. The 404 and 410 > response codes only indicate _that_ the request has failed, not _why_. > > > > >Sorry, I think you're reading things into RFC2616 which just aren't > >there. 410 is a special case of 404, "not found". That's all. 
>
> Sorry, but RFC 2616 clearly states that the resource may still exist

You're saying "clearly" here, so you can surely quote the spec where it supports that? (I'm not completely opposed to that view, but I sure do not think that RFC2616 is "clear" about that :-).

> regardless of a 404 or a 410 response. Those status codes just mean
> the request failed, and I think you're the one reading stuff in
> beyond that if you are saying that I can't make anything but a GET or
> a PUT request if the response code is 404 or 410 because those somehow
> imply that the resource does not exist.
>
> >404 is different from 200 + empty body, just like an empty file is
> >different from an absent file.
>
> How is an empty file different from an absent file, in terms of REST,
> which couldn't care less about how a resource is generated? The server

Even an empty file can have entity headers, such as Content-Type, which is part of the representation you obtain with GET/HEAD.

> is opaque from the client perspective, a client receiving a 404 or a
> 410 response can make no assumptions about whether a file exists or
> does not exist on the server, or used to exist, or won't exist again
> at some point in the future. The 404 and 410 responses simply mean
> the request failed.

Nope. The point of 410 is to tell the client that there was a representation, but it is gone forever. How can you say the client can make no assumptions in this case, like the fact that retrying on the same URI later is useless? For 404 the situation is less clear, granted.

> ...

Best regards, Julian
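Julian's distinction -- an empty file still yields a full representation, entity headers included, while an absent file yields none -- can be sketched with a hypothetical store:

```python
def get(files, url):
    """files: url -> bytes, where b"" is a legitimate empty file.
    Returns (status, headers, body). An empty file still carries
    entity headers such as Content-Type; an absent file yields
    a 404 with no representation at all."""
    if url not in files:
        return 404, {}, None
    body = files[url]
    headers = {
        "Content-Type": "text/plain",  # assumed media type for the sketch
        "Content-Length": str(len(body)),
    }
    return 200, headers, body
```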
> >> >Nope. 404 means no representation is available. > >> Nope, a 404 response is a representation of the resource's unavailability. > >I don't think RFC2616 supports that point of view. > Who's talking about RFC 2616? REST defines the response to a request as a representation of a resource, even if that representation is an error message, or no conceptual mapping to that resource has been defined. > >Well, that's Rest, not HTTP. I don't think it will work well to defend a >funny design using HTTP's 404/410 distinction with definitions from >Rest. Unless you can demonstrate the equivalent of 404/410 in Rest, of >course. > You require a 410 response to indicate a permanent condition, but I think that means you're the one who needs to defend his "funny" position on that issue, not me. What do you mean by "demonstrate the equivalent of 404/410 in REST"? If I wish to indicate that a resource is unavailable, either permanently or temporarily, and my distributed hypermedia application uses HTTP as its protocol, then the definitions of the 400, 403, 404 and 410 response codes are used exactly as they are unambiguously described in RFC 2616, at my discretion as the application developer provided that such use doesn't break the Uniform Interface constraint of REST. In terms of a generic REST connector, if my protocol were FTP then I would use 4yz as the response to the same request, if that's what you're after? Or 5yz if the request is not to be repeated. But, 404 and 410 each map to 4yz "transient negative completion reply" not 5yz "permanent negative completion reply" in RFC 765. HTTP's 400 and 403 responses map to 5yz "permanent negative completion reply" in FTP. > >You're saying "clearly" here, so you can surely quote the spec where it >supports that? (I'm not completely opposed to that view, but I sure do >not think that RFC2616 is "clear" about that :-). 
> "It is not necessary to mark all permanently unavailable resources as 'gone' or to keep the mark for any length of time -- that is left to the discretion of the server owner." (RFC 2616, 10.4.11) Meaning it is up to my discretion to change a 410 response to a 404 response, or even PUT back what used to be there at some point in the future and respond 200 OK again. Clients must not assume that 4yz indicates a permanent condition in FTP, or 404 or 410 in HTTP. The only thing a client should do, if it has link-editing capability, is delete links to the 410 resource -- whatever that means, perhaps the browser should delete a bookmark if it responds 410? > >The point of 410 is to tell the client that there was a representation, >but it is gone forever. How can you say the client can make no >assumptions in this case, like the fact that retrying on the same URI >later is useless? > Because the standards say the client can't make that assumption -- just because retrying the same request again and again for a year yields the same result is no reason to assume the resource won't be re-established the next day, simply because RFC 2616 leaves that up to the discretion of the server owner. The 413 response explicitly means "do not retry" unless there is a Retry-After header. Other than that, only the 403 comes with the expectation of permanence, other "permanent" conditions allow for the request to be modified and retried -- 404 and 410 do not indicate that the request must be modified as they are "transient" conditions. If you need the equivalence of FTP's 5yz semantics in HTTP then use a 403 response. -Eric
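The link-editing SHOULD Eric refers to (RFC 2616, 10.4.11) could be sketched from the client side like this. The function is hypothetical, and the spec's "after user approval" condition is modeled as a flag:

```python
def prune_links(bookmarks, statuses, user_approved=True):
    """Drop references to URLs that answered 410 Gone.

    bookmarks: list of URLs the client holds links to.
    statuses:  mapping of URL -> last observed HTTP status code.
    RFC 2616 makes deletion a SHOULD, contingent on user approval,
    so a non-approving user keeps all links unchanged."""
    if not user_approved:
        return list(bookmarks)
    return [url for url in bookmarks if statuses.get(url) != 410]
```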
Eric J. Bowman wrote:
> >> >Nope. 404 means no representation is available.
> >
> >> Nope, a 404 response is a representation of the resource's
> unavailability.
> >
> >I don't think RFC2616 supports that point of view.
> >
>
> Who's talking about RFC 2616? REST defines the response to a request as
> a representation of a resource, even if that representation is an error
> message, or no conceptual mapping to that resource has been defined.

I am talking about RFC2616. You are using RFC2616. The status code in an HTTP/1.1 response is not part of the representation, but a status code.

> >Well, that's Rest, not HTTP. I don't think it will work well to defend a
> >funny design using HTTP's 404/410 distinction with definitions from
> >Rest. Unless you can demonstrate the equivalent of 404/410 in Rest, of
> >course.
> >
>
> You require a 410 response to indicate a permanent condition, but I
> think that means you're the one who needs to defend his "funny" position
> on that issue, not me.
>
> What do you mean by "demonstrate the equivalent of 404/410 in REST"?
> If I wish to indicate that a resource is unavailable, either permanently
> or temporarily, and my distributed hypermedia application uses HTTP as
> its protocol, then the definitions of the 400, 403, 404 and 410 response
> codes are used exactly as they are unambiguously described in RFC 2616,
> at my discretion as the application developer provided that such use
> doesn't break the Uniform Interface constraint of REST.

Well, if they were "unambiguously described", I don't think we would have this discussion.

> ...
> >You're saying "clearly" here, so you can surely quote the spec where it
> >supports that? (I'm not completely opposed to that view, but I sure do
> >not think that RFC2616 is "clear" about that :-).
> >
>
> "It is not necessary to mark all permanently unavailable resources as
> 'gone' or to keep the mark for any length of time -- that is left to the
> discretion of the server owner."
(RFC 2616, 10.4.11)

OK, let's quote the spec, but completely:

"The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.

The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner."

So:

- the condition is expected to be permanent, and
- it's not necessary to *keep* it marked "gone"; but that doesn't mean the intent is that it can go back to a 2xx.

> ...

Best regards, Julian
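The choice the quoted spec text leaves to the server -- 410 only when the removal is known and intentional, 404 when the server cannot determine permanence -- might be sketched as a small decision function (my illustration; the parameter names are invented):

```python
def status_for(url, live, remembered_deleted):
    """Choose a status per RFC 2616, 10.4.11: answer 410 only when
    the server remembers an intentional, presumed-permanent removal;
    otherwise fall back to 404."""
    if url in live:
        return 200
    if url in remembered_deleted:
        return 410  # known, intentional removal the server still remembers
    return 404      # server does not know whether the condition is permanent
```

Dropping a URL from `remembered_deleted` is exactly the "not necessary to keep the mark" discretion the spec grants, after which the same URL answers 404.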
>
>> >Nope. 404 means no representation is available.
>
>> Nope, a 404 response is a representation of the resource's
>> unavailability.
>
>I don't think RFC2616 supports that point of view.
>
>> Who's talking about RFC 2616? REST defines the response to a request as
>> a representation of a resource, even if that representation is an error
>> message, or no conceptual mapping to that resource has been defined.
>
>I am talking about RFC2616. You are using RFC2616. The status code in an
>HTTP/1.1 response is not part of the representation, but a status code.
>

But we are talking about the response, not the response code -- a hint being my use of the phrase "a 404 response is a representation" -- because I don't know how the server can send only a response code without sending headers, or on a GET response, an entity body. To me, the HTML I get back from the server with headers and a 404 response code saying "Not Found" is a representation. Are you here to discuss REST or nitpick semantics?

>
>- the condition is expected to be permanent, and
>
>- it's not necessary to *keep* it marked "gone"; but that doesn't mean
>the intent is that it can go back to a 2xx.
>

Didn't you start off in this thread by telling me that I was violating RFC 2616 by setting a 410 to be a 404? Am I still wrong about that, and if not, would it hurt you to just once concede a point to me, in any thread on any list, instead of scouring my posts for obscure semantic inconsistencies (response vs. response code) to nitpick?

To me, "the discretion of the server owner" means that it's my URL and I can respond to requests for it however I deem appropriate. There is no mention in the 410 description about the client repeating the request, and even the 403 response calls that a SHOULD NOT, not a MUST NOT. So, are you sure that I MUST NOT change a 410 into a 200 at my discretion as the naming authority?
Are you still insisting that a client, once it knows that a resource has responded 410, can safely expect that no resource will ever again be identified by that URL? And that I can't expect a client or intermediary to send a DELETE request to the origin server if it knows a resource has been marked 410 Gone? Or are you perhaps reading something into the spec on the 410 response which means that, although a 410 doesn't have to be permanent, it must remain some sort of 4xx response? Because I just don't see where the spec supports _any_ of your arguments. -Eric
Eric J. Bowman wrote:
>>>>>> Nope. 404 means no representation is available.
>>>>> Nope, a 404 response is a representation of the resource's unavailability.
>>>> I don't think RFC2616 supports that point of view.
>>> Who's talking about RFC 2616? REST defines the response to a request as a representation of a resource, even if that representation is an error message, or no conceptual mapping to that resource has been defined.
>> I am talking about RFC2616. You are using RFC2616. The status code in an HTTP/1.1 response is not part of the representation, but a status code.
> But we are talking about the response, not the response code, which is why I use the phrase "a 404 response is a representation" -- because I don't know how the server can send only a response code without sending headers, or, on a GET response, an entity body. To me, the HTML I get back from the server with headers and a 404 response code saying "Not Found" is a representation. Are you here to discuss REST or nitpick semantics?

We're obviously using the same terms, but with different semantics. In HTTP, you use GET to obtain a representation, and a 404 tells you that the server hasn't got one for you. Thus, in terms of RFC2616, the entity body carries an error message, which is *not* a representation of the resource. Consider a login form sent back with 401: is that a representation of the resource I tried to GET?

>> - the condition is expected to be permanent, and
>> - it's not necessary to *keep* it marked "gone"; but that doesn't mean the intent is that it can go back to a 2xx.
> Didn't you start off in this thread by telling me that I was violating RFC 2616 by setting a 410 to be a 404? Am I still wrong about that, and if not, would it hurt you to just once concede a point to me, in any thread on any list, instead of scouring my posts for obscure semantic inconsistencies (response vs. response code) to nitpick?

Funny enough, the point here is to learn and exchange arguments, not to be right all the time. Yes, I may have thought that a 410 is more permanent than it may be in some servers. As far as I can tell, the wording is the way it is to make 410 usable at all; basically it allows the server to have a limited memory of things-that-were-there-and-now-are-gone, which is still better than not providing this information to the client. (BTW, I noticed your attack and will ignore it...)

> To me, "the discretion of the server owner" means that it's my URL and I can respond to requests for it however I deem appropriate. There is no mention in the 410 description about the client repeating the request, and even the 403 response calls that a SHOULD NOT, not a MUST NOT. So, are you sure that I MUST NOT change a 410 into a 200 at my discretion as the naming authority?

Of course a server can do whatever it wants. However, that's not a license to actually do so in general. One point of a protocol with well-defined (and not-so-well-defined...) protocol elements like status codes, headers, etc. is that independently developed clients can actually use it. If you bend the rules too much, generic clients will fail to work with your server, that's it.

When you send a 410 to a generic client, you're telling it: there's nothing here anymore, and this isn't going to change soon. So in general, you wouldn't send that status to a client if you want the client to access it again. Now, if you have your own *specific* client that knows otherwise, fine. That's then a closely coupled relationship between client and server. But in this case, it really doesn't matter what you use, because it's not what the spec says. It will work, but just with that specially written client.

> Are you still insisting that a client, once it knows that a resource has responded 410, can safely expect that no resource will ever again be identified by that URL? And that I can't expect a client or intermediary to send a DELETE request to the origin server if it knows a resource has been marked 410 Gone?

I'm just pointing out that if you send out a 410 to a client, it should take that literally, and assume that the resource is gone and there's no point in repeating the request. And no, you can't *expect* a client to send subsequent requests to that URL, unless there's out-of-band knowledge about that behavior.

> Or are you perhaps reading something into the spec on the 410 response which means that, although a 410 doesn't have to be permanent, it must remain some sort of 4xx response? Because I just don't see where the spec supports _any_ of your arguments.

I think this is exactly what it says: "The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent."

So, was the original question: "How do I do a two-stage delete with HTTP that generic clients will grok?" My answer to *that* is that I don't think there's an interoperable solution. An HTTP-compliant way seems to be to implement the DELETE as a MOVE request, and to provide the client with the new URL (in a trashcan), as proposed earlier on this thread.

Best regards, Julian
* Julian Reschke <julian.reschke@...> [2007-07-07 09:40]:
> Please stop the WebDAV bashing. WebDAV does indeed use new methods where new resources may have worked as well, but that's a completely separate argument :-)

There are a number of bogus claims against WebDAV that I am happy to correct, but I happen to think its bias toward introducing methods over resources is incontrovertible (and has obvious parallels to the discussion at hand). Therefore sorry, but not gonna. :-)

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Eric J. Bowman wrote:
> On my workstation, I'm always deleting files twice. The first time I delete a file, the file itself is not altered or moved, yet now it appears in my "trash" until I delete it for a second time. When the file is in the trash, its status is "used to exist", but after it's been deleted from the trash its status is "not known to have existed."
>
> I'm simply applying this everyday computing paradigm to an HTTP server.

This is way off on several counts.

Firstly, you aren't applying an "everyday computing paradigm" to an HTTP server. You're applying a file-system paradigm to an HTTP server. You're starting off with a category error, since HTTP servers aren't file systems.

Second, you don't delete a file twice. Depending on which metaphor you prefer, you are either first moving a file to a new location and then deleting it, or else first deleting it and then deleting another version of it from a special location for deleted files. If you want to model the first metaphor, you would do a POST to some resource-moving resource and then a DELETE. If you want to model the second, you would do a DELETE and then somehow GET the location of the special location for deleted resources.

DELETE deletes. I think it's made perfectly clear in RFC 2616 that DELETE deletes. When you have deleted a resource, it is deleted.

DELETE /parrot HTTP/1.1

does not move the parrot to /pining/parrot.

He's not pining! He's passed on! This parrot is no more! He has ceased to be! He's expired and gone to meet his maker! He's a stiff! Bereft of life, he rests in peace! If you hadn't nailed him to the perch he'd be pushing up the daisies! His metabolic processes are now history! He's off the twig! He's kicked the bucket, he's shuffled off his mortal coil, run down the curtain and joined the bleedin' choir invisible!! THIS IS AN EX-PARROT!
On 7/9/07, Jon Hanna <jon@...> wrote:
> Second, you don't delete a file twice. Depending on which metaphor you prefer you are either first moving a file to a new location and then deleting it, or else first deleting it and then deleting another version of it from a special location for deleted files.

You seem to be assuming that the client is requesting that the "move to trash" happen. AFAICT, it is not, with either the DELETE method or (in Windows) the right-click-delete action (a drag-to-trash action would be different, of course).

> DELETE deletes. I think it's made perfectly clear that DELETE deletes in RFC 2616. When you have deleted a resource, it is deleted.

Sure, but it's also up to the server to decide what constitutes deletion, at least within the bounds of the definition in 2616. By that measure, what Eric describes seems fine.

Mark.
--
Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
Coactus; Web-inspired integration strategies http://www.coactus.com
Jon Hanna <jon@...> writes:
> DELETE deletes. I think it's made perfectly clear that DELETE deletes in RFC 2616. When you have deleted a resource, it is deleted.

"DELETE deletes" hardly explains things, since the discussion revolves around what "delete" actually entails. I prefer to say that the resource and the URL have been unbound from each other.

When Eric said that Aristotle's example changes the DELETE semantic from "remove" to "move", he was tying himself too closely to the lower-level model. One could view every resource as being bound to at least two URLs at the beginning of its life-cycle. One of the URLs is created and managed automatically by the system. It stays, for the most part, invisible until the other URL is DELETE-ed. When that happens, this system-managed URL is made known through various mechanisms, e.g. by including it in the DELETE response. This way, there is no moving being done. DELETE removes. It removes one of the bindings. It unbinds a resource from the URL being DELETE-ed.

The fact that a system does not create the binding to a system-managed URL at the beginning of a resource's life-cycle in the lower-level point of view, or creates one only upon a DELETE request, does not mean that logically it has not been there since the beginning.

Imagine a coder that writes a loop and a compiler that unrolls said loop in the final translation. Do you say that there is a loop because the coder meant to have a loop, or that there is not a loop because the final rendition does not contain any looping construct? I say there is a loop, because caring how a logical view is rendered at the lower level unnecessarily restricts what can be done during the translation. This also applies to human renderers too, like you and I, who have to translate diagrams-on-napkins into some code.

YS
"Eric J. Bowman" <eric@...> writes:
> Sorry about that, again, everyone. One of these days I will be changing mail providers.

I hope changing mail providers also means changing mail agents too. Your current mail agent is not message-threading-friendly: it does not have, nor propagate, any In-Reply-To or References header, nor any value in lieu of them.

With this email, I also apologise if my emails ever rehash things that have been disclaimed before, because it really is difficult to follow the different conversation threads manually.

YS.
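For anyone unfamiliar with the headers YS is referring to: a threading-capable mail agent copies the parent message's Message-ID into In-Reply-To and appends it to References when composing a reply, which is what lets other agents rebuild a thread. A minimal sketch using Python's standard email package -- the addresses, subject, and IDs below are made up for illustration:

```python
from email.message import EmailMessage

def make_reply(parent, body):
    """Build a reply whose threading headers point back at `parent`."""
    reply = EmailMessage()
    reply["Subject"] = "Re: " + parent["Subject"]
    reply["From"] = parent["To"]
    reply["To"] = parent["From"]
    # In-Reply-To names the direct parent; References accumulates the
    # whole chain of ancestor Message-IDs (RFC 2822, section 3.6.4).
    parent_id = parent["Message-ID"]
    reply["In-Reply-To"] = parent_id
    refs = parent.get("References", "")
    reply["References"] = (refs + " " + parent_id).strip()
    reply.set_content(body)
    return reply

parent = EmailMessage()
parent["Subject"] = "Atom, 'process-this'-POST and rockets"
parent["From"] = "a@example.org"
parent["To"] = "rest-discuss@example.org"
parent["Message-ID"] = "<1234@example.org>"
parent.set_content("...")

reply = make_reply(parent, "Commenting over here...")
print(reply["In-Reply-To"])   # <1234@example.org>
print(reply["References"])    # <1234@example.org>
```

An agent that at least preserves References, even without In-Reply-To, keeps an archive like this one threadable.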
* Yohanes Santoso <yahoo-rest-discuss@...> [2007-07-09 15:05]:
> I prefer to say that the resource and the URL have been unbound from each other.
>
> When Eric said that Aristotle's example changes the DELETE semantic from "remove" to "move", he is tying himself too closely to the lower-level model.

The funny thing is that “resource is unbound from one of its names” is how deletion works on Unix filesystems as well! Eric imposes a restriction that not even filesystems follow.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
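Aristotle's point is easy to demonstrate: Unix unlink() removes one *name* bound to a file, and the data survives as long as another hard link to it exists. A small illustration in Python, run in a throwaway temp directory (requires a Unix-like filesystem for os.link):

```python
import os
import tempfile

# Work in a scratch directory so nothing real is touched.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "foo")
alias = os.path.join(workdir, "bar")

with open(original, "w") as f:
    f.write("still here")

os.link(original, alias)      # bind a second name to the same inode
os.unlink(original)           # "delete" unbinds only one name

# The file was never moved or destroyed; only a binding was removed.
assert not os.path.exists(original)
with open(alias) as f:
    print(f.read())           # still here
```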
Hi Eric,
to put things in context, all of my following points below apply
to the following protocol:
> DELETE /foo/bar
< 200 OK
< Content-Type: application/vnd.exampleorg.tombstone+xml
<
< <tombstone>
< <dead href="/foo/bar" />
< <epitaph href="/deleted/foo/bar" />
< </tombstone>
> GET /foo/bar
< 410 Gone
> DELETE /deleted/foo/bar
< 204 No Content
> GET /foo/bar
< 404 Not Found
All other suggestions I made were misguided.
With that in place, onward…
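As a sanity check, the trace above can be modeled with a small in-memory resource table; DELETE on a live URI returns the tombstone entity naming the epitaph URI, and DELETE on the epitaph erases the server's memory. The Server class and its internals are invented for illustration, not a real HTTP implementation:

```python
class Server:
    def __init__(self):
        self.live = {"/foo/bar": "hello"}   # URI -> representation
        self.epitaphs = {}                  # epitaph URI -> dead URI
        self.gone = set()                   # URIs currently answering 410

    def get(self, uri):
        if uri in self.live:
            return 200, self.live[uri]
        if uri in self.gone:
            return 410, None
        return 404, None

    def delete(self, uri):
        if uri in self.live:
            # First stage: unbind the resource and expose a tombstone.
            del self.live[uri]
            self.gone.add(uri)
            epitaph = "/deleted" + uri
            self.epitaphs[epitaph] = uri
            body = ('<tombstone><dead href="%s" />'
                    '<epitaph href="%s" /></tombstone>') % (uri, epitaph)
            return 200, body
        if uri in self.epitaphs:
            # Second stage: deleting the epitaph turns the 410 into a 404.
            self.gone.discard(self.epitaphs.pop(uri))
            return 204, None
        return 404, None

s = Server()
print(s.delete("/foo/bar")[0])          # 200, with tombstone entity
print(s.get("/foo/bar")[0])             # 410
print(s.delete("/deleted/foo/bar")[0])  # 204
print(s.get("/foo/bar")[0])             # 404
```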
* Eric J. Bowman <eric@...> [2007-07-07 02:05]:
> >The fact remains that this design does not have problems with
> >idempotency nor does it require as much hardwired knowledge
> >about the specific semantics of your protocol from clients
> >(aka REST-RPC hybrid).
>
> But it does require the server connector to understand the
> semantics of DELETE to mean something other than "remove" in
> addition to meaning "remove" depending on the URL the DELETE
> request is sent to, or some other shared knowledge between
> client and server.
So does your design. It requires clients to know that 410 Gone
means something other than 410 Gone, in addition to 410 Gone.
> In my setup, the response to a DELETE request is
> straightforward -- the status of the resource changes to
> reflect the request, no matter what client is making the
> DELETE.
So it does in mine.
> The optional, sysadmin-only second DELETE does require
> knowledge of the specific protocol in that it must have an
> If-Match header, but the semantics of making such a DELETE
> request are still "remove", which doesn't break the Uniform
> Interface constraint.
So it does in mine. However, my design requires the client to
understand a specific media type. This is in line with the REST
constraints.
It does not require the client to have knowledge of the
overloading of a status code, like yours does, which breaks
interface uniformity.
In other words, all of your objections/support claims apply
equally to both of our protocols, except that mine uses hypermedia
where yours does not, and yours breaks uniformity where mine does
not.
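For comparison, Eric's single-URL double-DELETE design as described in this thread can be sketched the same way: the first DELETE flips the URI to 410 Gone, and only a second, sysadmin-only DELETE carrying a matching If-Match erases it to 404. The class name, ETag values, and 412 handling here are assumptions for illustration:

```python
class DoubleDeleteServer:
    def __init__(self):
        # uri -> (status, body, etag)
        self.state = {"/foo/bar": (200, "hello", '"v1"')}

    def get(self, uri):
        status, body, _ = self.state.get(uri, (404, None, None))
        return status, body

    def delete(self, uri, if_match=None):
        status, _, etag = self.state.get(uri, (404, None, None))
        if status == 200:
            # First DELETE: the same URI now answers 410 Gone.
            self.state[uri] = (410, None, '"gone-v1"')
            return 200, None
        if status == 410:
            # Second DELETE requires a matching If-Match (optimistic locking),
            # after which the URI answers a plain 404.
            if if_match != etag:
                return 412, None
            del self.state[uri]
            return 204, None
        return 404, None

s = DoubleDeleteServer()
print(s.delete("/foo/bar")[0])                         # 200
print(s.get("/foo/bar")[0])                            # 410
print(s.delete("/foo/bar")[0])                         # 412, no If-Match
print(s.delete("/foo/bar", if_match='"gone-v1"')[0])   # 204
print(s.get("/foo/bar")[0])                            # 404
```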
* Eric J. Bowman <eric@...> [2007-07-07 05:30]:
> >> Or, to phrase that better. I can do whatever I want with the
> >> origin server, but REST constrains the behavior of the
> >> _connector_ that origin server uses to communicate with the
> >> outside world.
> >
> >Citation please. I disagree that there is any part of REST
> >which imposes any particular behaviour on the server.
>
> "Server" means both the origin server component, and the server
> connector. REST dictates that the server connector meets the
> generic interface constraint, meaning the server connector must
> behave in a very specific way that may be generically
> understood by clients as well as intermediary caches. REST
> also dictates that the origin server manage the namespace in
> such a way that the semantics of the mappings are static.
What? No. A RESTful server is not a filesystem. It is perfectly
fine for a successful PUT to create 15 new resources in addition
to the one that was stored at the request URI, if the server uses
the content of the entity body to expose derived resources.
Likewise it is perfectly fine for a DELETE that changes the
server state in more ways than just unbinding a resource from a
URI to result in the creation of resources that expose this other
state.
REST is not CRUD. Can we get away from that please?
> "An origin server uses a server connector to govern the
> namespace for a requested resource. It is the definitive source
> for representations of its resources and must be the ultimate
> recipient of any request that intends to modify the value of
> its resources. Each origin server provides a generic interface
> to its services as a resource hierarchy. The resource
> implementation details are hidden behind the interface."
This in no way contradicts what I said.
* Eric J. Bowman <eric@...> [2007-07-07 02:55]:
> >Why do you keep thinking in terms of files? Files are
> >irrelevant. What does moving a file to a URI even mean?
>
> Sorry, I meant to use the word "source", which could be a file,
> or it could be a database cell, or a combination of both, or
> something else entirely. But if you are changing the
> identifier of that source, you are making it a new resource,
> which is either a MOVE or a COPY. All I am doing is flagging
> the source as having been removed, no new URL required.
Where did I say to change the identifier?
I said that the server responds by exposing a different resource
at a different URI.
You’re putting words in my mouth that never came out of it. (Or
my keyboard, as the case were.)
> >You're exposing your knowledge of the previous existence of a
> >resource as a separate resource.
>
> I suppose you could do it that way if you wanted and could make it
> RESTful,
The design I proposed *is* RESTful. I told you which constraints
*your* design violates, so if you want to claim otherwise about
mine, please return the favour.
> but I am merely representing the resource as having been
> removed, not assigning it a new identifier that must be
> interpreted as having the same meaning as a 4xx response even
> though it's giving a 200 OK response. How, by dereferencing
> the URL which includes "/trash/" in its path, does the server
> convey to me that the file has been removed? If I must infer
> this from the URL then I'm forgetting that URLs are opaque.
No, you’re not. The interpretation of that URI comes from the
hypermedia which the server returns upon DELETE, not from a
substring inside the URI.
What matters is that the URI is found in the `epitaph` element of
the response, not that it is rooted at `/deleted/`. You could
just as well return
<tombstone>
<dead href="/foo/bar" />
<epitaph href="/xyzzy/frobnitz/veeblefitzer" />
</tombstone>
and the protocol would work just the same.
Hypermedia as the engine of application state.
> >I admit that responding with a redirect is the wrong answer.
> >It took me a few iterations to get to a 200 response to a
> >DELETE with a link in the entity-body, but that is the right
> >approach.
>
> I agree that a link in a 200 or 204 response is better than a
> redirect, but I still believe such a response breaks the
> Uniform Interface constraint by tunneling "move" through
> "remove" and that a PUT followed by a DELETE is a RESTful, RFC
> 2616-based solution.
It doesn’t tunnel anything. Nothing is getting moved.
State that means “I remember something about that one resource”
gets exposed with a new URI.
> >The server responds to the DELETE by saying “OK, it’s gone;
> >here’s a description of how you can also delete my memory of
> >its previous existence by deleting the following resource.”
>
> You're saying that sending the client a different URL that also
> needs deletion to fully remove the resource, is superior than
> sending two DELETE requests to the same URL to achieve the same
> thing. I'm still not seeing the need for the added complexity
> of executing a MOVE as part of a DELETE request, and I still
> don't understand how changing an URL to reflect state in the
> path segment is understandable by intermediaries who only
> interpret 410 Gone as meaning "removed", not a 200 OK response
> from a different URL that includes "/trash/" in the path.
There is no moving. There is no interpretation of URI paths.
> Once you've assigned a "deleted" URL to the resource, you now
> have two identifiers for the same resource.
No, I have a new identifier for a new resource.
> In and of itself, this is not a problem, except that each URL
> gives a different representation of resource state (one is 4xx,
> the other 200). Which one is authoritative about the resource
> state being "removed", the 404 or 410 response, or the 200 OK
> response? Wouldn't this confuse user-agents, and users?
No, there are two representations of two resources; while one of
the resources is *about* the other resource, it is false to say
it *is* the other resource. So if you ask the server for the
resource, it authoritatively states that this resource is gone,
and if you ask it whether this resource used to exist, it
authoritatively responds in the affirmative. There is no
contradiction.
> >The resource and the server’s memory of its existence are two
> >separate Things, and should be exposed separately.
>
> I'm sorry, but I see all of this as simply changing the state of
> one resource. First, it exists. Then, it is gone. Then, as an
> option, it was never there. But none of this implies that the
> server has forgotten, or should have forgotten, about the
> resource. I am merely altering the response to requests for
> one resource, to reflect the current state of that resource, by
> returning a status code.
The status of that resource should itself be exposed as a
resource if you want clients to be able to manipulate it. If you
want clients to be able to manipulate any aspect of server state,
then that state must be exposed as a resource if you want to
comply with the REST constraints.
> >Your double-DELETE is a very surprising interpretation of what
> >RFC 2616 allows. You are effectively overloading the meaning
> >of 410 in a way that the uniformity constraint reserves for
> >the entity body.
>
> ??? The Uniform Interface constraint pertains to request
> methods and their corresponding response codes. Where does RFC
> 2616 tell me that the server can no longer accept requests once
> a resource has had its status changed to 410 Gone? And where
> does anything say that response codes apply to the entity body?
> They convey the status of the resource, but the response may
> contain both resource headers and entity headers.
It’s not the second DELETE where you break the uniformity
constraint, it’s the _first_.
> RFC 2616 clearly allows a DELETE to change the status of a
> resource to either 404 or 410, this is exactly what the Uniform
> Interface constraint means. There is no restriction in either
> RFC 2616 or REST which states that the resource must respond
> 200 OK before a DELETE request may be accepted. In fact, RFC
> 2616 clearly states that a resource responding 404 or 410 can
> still exist -- it may just be a matter of privilege level,
> where authorization is required before a GET will respond 200
> OK.
Exactly!! That is how your design breaks the constraint. You
spell it out in detail and then fail to realise the consequences
of what you said: the client cannot assume that 410 Gone means
anything but 410 Gone! But you expect the client to make such an
assumption. Your protocol breaks uniformity at that point.
> >You seem to be following the WebDAV school of HTTP which
> >considers resources somehow equivalent to files on the
> >server’s disk and prefers to model additional aspects of state
> >by putting them into the method instead of exposing them as
> >resources.
>
> That characterization couldn't be further from my view of
> things. We are discussing a situation where there is both a
> resource, and a source, I use the term "source file" because
> that is exactly what I am discussing in this thread -- my
> implementation, which in this case is using a file. I used the
> example I used because too many people are claiming that the
> deletion of a resource must result in the deletion of the
> source. So it is just a narrative convenience to speak in
> terms of a DELETE only changing the status of the resource
> without touching the source "file", because only the resource
> mapping gets deleted -- or rather, has its status changed to
> express a "removed" state to the requesting client.
>
> My application uses one URL and content negotiation to serve
> four different "text/html" representations and three
> "application/xhtml+xml" representations (plus one Atom and one
> PDF) depending on client capability, so I would have to say
> that I am keenly aware of the separation between the HTTP
> resource/representation model and the file-centric models of
> FTP and WebDAV.
Yes, very fine; except that you then go on to drop a clanger like
the following:
> >URI proliferation is about multiple names for the same
> >resource. I don’t understand how this applies to my proposed
> >protocol design. From a REST point of view, URI starvation
> >(where you don’t expose sufficient state as separate resources
> >and overload aspects of other messages instead) is worse.
>
> I have to disagree, there. If I have one resource which has a
> variety of possible states, then I want the response to a
> request for that resource to reflect the current state of the
> resource, in the retrieved representation -- either as an
> entity body or as a control code. I do not want to change the
> semantics of the mapping of my resource when state changes,
How is that not a file-centric world view? You are fixated on how
your URIs map to your filesystem, which is what I was saying: you
don’t want to make up resources not backed by your filesystem,
which is a WebDAV-ish worldview.
/deleted/foo/bar doesn’t have to be a file created in your
filesystem by the server upon deletion of whatever file /foo/bar
maps to, and in fact probably shouldn’t be.
> i.e. the resource is a conceptual mapping that does not include
> any information about the resource state, that can only be
> conveyed in a representation of the resource, not by deducing
> the meaning of the assigned URL.
No one said to deduce meaning from any URI.
You can use /xyzzy/frobnitz/veeblefitzer in place of
/deleted/foo/bar and /bender/casino/blackjack in place of
/deleted/foo/baz as long as the media type you return from a
successful DELETE has well-defined meaning.
> If I have some other semantic mapping for the same resource, then I
> assign it a new URL, i.e. sometimes I want to describe the "thing of
> the day". Under your method, that would need to be changed to "the
> deleted thing of the day" if someone deletes "a thing" at the wrong
> time.
There is no other semantic mapping for the same resource. There
is a mapping for knowledge *about* the resource, which is not the
same thing as the resource itself.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> This is way off on several counts.
>
> Firstly, you aren't applying an "everyday computing paradigm to an HTTP server". You're applying a file-system paradigm to an HTTP server. You're starting off with a category error, since HTTP servers aren't file systems.

I suggested a thoroughly simple mechanism, others have insisted that it be compared to a trashcan, I have used that metaphor to help clarify my position, and now you are accusing me of trying to apply that metaphor to HTTP, which was never my intent in the first place.

Let me try my comparison again. Most computer users are familiar with the metaphor of deleting a file twice. The fact that the filesystem doesn't actually delete the file until it's deleted from trash means nothing to the user, because from *the user's perspective* the file is deleted twice. The first delete isn't called "move to trash", it's called "delete".

I wish nobody had ever mentioned trashcans. That had nothing to do with my original post, only the arguments made against it. Now you're attacking my response to those arguments as proof that I'm categorically wrong? Please, if you're going to argue against what I am doing, then focus on what's wrong with what I'm doing, not on why you don't like my rebuttal to the arguments of others trying to frame this debate as "like a trashcan".

> DELETE deletes. I think it's made perfectly clear that DELETE deletes in RFC 2616. When you have deleted a resource, it is deleted.

Where have I broken this constraint? When a DELETE is made to an URL, the status changes to 410 Gone. From the perspective of the client, the DELETE succeeded, right? How is this an example of DELETE not deleting?

> DELETE /parrot HTTP/1.1 does not move the parrot to /pining/parrot

Exactly the argument I have been striving to make. Are you sure you replied to the right person here? I have quite strenuously objected to using DELETE to move a file, or changing the URL to indicate a "deleted" state for the resource.

So what is your point, Jon? I have a resource, I DELETE a resource, that resource then responds 410 Gone... what am I doing wrong, exactly? Where does RFC 2616 say I must actually delete the source?

-Eric
> The funny thing is that “resource is unbound from one of its names” is how deletion works on Unix filesystems as well! Eric imposes a restriction that not even filesystems follow.

What are you talking about? What restriction am I imposing, and what does a filesystem have to do with REST interactions?

-Eric
On 7/9/07, Eric J. Bowman <eric@...> wrote:
> I suggested a thoroughly simple mechanism, others have insisted that it be compared to a trashcan, I have used that metaphor to help clarify my position, now you are accusing me of trying to apply that metaphor to HTTP, which was never my intent in the first place.

*waves hand* That was me. And Jon, I'd argue there's nothing wrong with it, especially since I said the move-to-trashcan (a/k/a "mark for deletion") should probably be done with a PUT... there was never a double-DELETE issue there. *My* web server, at least, isn't so very far from a file system. But though I'm a Perl programmer, I haven't messed with Parrot, so I could be wrong...

Also, Eric, your trusty hamblocker... er, spamblocker has bounced the Gmail invite I sent you, but I think they're unnecessary these days anyway. It'd solve most of your (well, our) list-related problems.
> to put things in context, all of my following points below apply to the following protocol:
>
> > DELETE /foo/bar
> < 200 OK
> < Content-Type: application/vnd.exampleorg.tombstone+xml
> <
> < <tombstone>
> <   <dead href="/foo/bar" />
> <   <epitaph href="/deleted/foo/bar" />
> < </tombstone>
>
> > GET /foo/bar
> < 410 Gone
>
> > DELETE /deleted/foo/bar
> < 204 No Content
>
> > GET /foo/bar
> < 404 Not Found

In order for this to work, the client must somehow know that the URL being sent back in <epitaph> is the same resource that the user just requested be DELETEd. In other words, your DELETE semantic is "move". If you want to assign a new URL to the resource, then you PUT the resource to the new /deleted/ URL followed by a DELETE to the no-longer-desired URL. But even that isn't right, because the old URL is still identifying a concept that has not been DELETEd from the server, so you've only accomplished changing the semantics of the mapping, which is the one thing REST doesn't allow you to change about your resource identifiers.

See, if you then GET /foo/bar and receive a 410 response, the server is lying, because it should really respond with a 307 redirect to /deleted/foo/bar since the resource isn't really gone, just moved, *and the server knows it*, so it should tell the client.

> So does your design. It requires clients to know that 410 Gone means something other than 410 Gone in addition to 410 Gone.

No, it most certainly does not. Some people here assume that a client MUST NOT ever repeat a request which results in a 410 Gone. If there were any support for such a view in the standards, then the comments suggesting that my 410 Gone is somehow "overloaded" might have a point, but under the current standards, I'm sorry, but you do not.
Your premise is that by sending a DELETE to an URL which is known to respond to GET requests with 410 Gone, my client has violated the restriction that says once an URL responds 410 Gone to a GET request, all other request methods are disallowed and the condition is permanent for ever and ever. But this premise is not supported by the standards, therefore you can't use this premise to critique my standards-compliant use of the 410 response.

> > In my setup, the response to a DELETE request is straightforward -- the status of the resource changes to reflect the request, no matter what client is making the DELETE.
>
> So it does in mine.

Except your status code is wrong. The server knows for a fact that the resource still exists in a new location, and that as long as the resource is in /deleted/ it may be restored to its original location. So I would suggest that the server is lying in sending a 410 Gone response instead of a 404 -- or a 307 to the new location. The client requested a DELETE and got a MOVE instead.

> So it does in mine. However, my design requires the client to understand a specific media type. This is in line with the REST constraints.

Sorry, but no. Your specific media type instructs the client to interpret the response to a DELETE request as a MOVE. But I still say that a media type cannot be used to change the semantics of an HTTP request. A media type is a data format, not an API; if the semantics of a DELETE are MOVE for one media type but not for another, then there is no generic interface present in your app, as clients must not only know how to render that media type for display, but also how to change their connector behavior in order to re-interpret HTTP for that media type.

A generic-interface-based client will GET /deleted/foo/bar and receive a 200 OK status code. This does not reflect any sort of "deleted" state for the resource, only "present and accounted for".
> It does not require the client to have knowledge of the overloading of a status code, like yours does, which breaks interface uniformity.

I'm sorry, but you have not explained how I am "overloading" the 410 Gone response. A DELETE request is received, the server then responds 410 Gone, and the resource is no longer available at that URL, nor has it been moved to some other URL. So I fail to see how the results of my interaction are anything other than what any HTTP client would expect, as no subsequent request to that or any other URL will respond with a 200 OK with the entity that was just DELETEd. It may change to 404 at some point, or it may return to a 200 OK status at some point, but only if some other action is taken first -- clients cannot infer permanence from any response.

> In other words, all of your objections/support claims apply equally to both of our protocols, except that mine uses hypermedia where yours does not, and yours breaks uniformity where mine does not.

Sorry, but no. While you show an understanding of HEAS, you take it too far if you claim that a client sending a DELETE and receiving a 204 No Content response breaks with HEAS because no link was clicked and no URL was received in the response. My DELETE actually deletes; your DELETE moves (by assigning a new URI) without deleting anything.

> REST is not CRUD. Can we get away from that please?

That's a strawman argument. I don't see how anyone reading my posts would get the impression that I treat REST as CRUD. Claiming that's what I'm saying, then refuting that claim, is disingenuous.

> The design I proposed *is* RESTful. I told you which constraints *your* design violates, so if you want to claim otherwise about mine, please return the favour.

I've been trying, really I have.
If you are imparting MOVE semantics to a DELETE request then you are directly violating the Uniform Interface constraint, because you are overloading DELETE when two separate HTTP actions get the job done without needing a media type to define special interaction semantics. If you want to implement a trashcan for some reason, then be my guest. I do not want to implement a trashcan. But if you do, then you should first PUT the resource to the new URL, then DELETE the resource at the old URL and respond 404, not 410, because you are implying that the resource may be un-deleted at some point. Or better yet, use a 307. My 410 Gone is not a trashcan and has no mechanism to un-delete anything from a trashcan. So I don't understand why you keep insisting that I must somehow implement a trashcan in order to be RESTful. I DELETE a resource; subsequently it responds 410 Gone and really is gone, not available at some new URL -- what is the problem with that? > >No, you’re not. The interpretation of that URI comes from the >hypermedia which the server returns upon DELETE, not from a >substring inside the URI. > You're using a media type to describe interaction semantics, which is not its purpose. The purpose of a media type is to provide hints on rendering the entity for display, not to dictate nonstandard client behavior. You can't use a media type to give DELETE the semantics of MOVE, because a media type does not describe a networking API. A media type can tell a client how to interpret a URL found in the markup, but it can't tell the client anything about the interaction which led to that response. My question remains. How are you instructing the client that the DELETE it just sent was handled as a MOVE? If the client must *infer* this from the media type, then the interface is hardly generic -- in a generic interface, media type does not affect method implementation. 
> >What matters is that the URI is found in the `epitaph` element of >the response, not that it is rooted at `/deleted/`. You could >just as well return > > <tombstone> > <dead href="/foo/bar" /> > <epitaph href="/xyzzy/frobnitz/veeblefitzer" /> > </tombstone> > >and the protocol would work just the same. > >Hypermedia as the engine of application state. > Of course, to any client which doesn't understand your media type, my objections stand in that there is no way to know that the DELETE was really a MOVE. You can't use a media type to redefine HTTP interaction semantics like that, because that non-generic behavior can't be repeated except by a client with knowledge of the media type. Internet Explorer has no clue about the application/xhtml+xml media type, but it can still GET such a representation and ask the user to save it. Only the rendering is affected, not the interaction semantics. In your system, a client without knowledge of your media type cannot interact with the system because the interface is not generic. How does a client without specific knowledge of your system infer that a DELETE has been treated as a MOVE? It could, if you were using a 307 redirect instead of telling the app that the resource was removed entirely by sending 410 Gone. But that wouldn't make it REST, because the server's response to the DELETE method is to rename the resource -- imposing a "move" semantic on DELETE has nothing to do with a generic interface. > >It doesn’t tunnel anything. Nothing is getting moved. > You are moving the resource for which a DELETE was requested, to another location, instead of following the client request to DELETE the resource. Worse, you're telling clients that the DELETE request was successful and using 4xx on subsequent responses when the resource still exists at a new location, instead of redirecting. > >No, I have a new identifier for a new resource. > Ugh. A resource is a concept, a URI identifies that concept. 
You can have two URIs identify the same concept, yes. But what you are doing is assigning two different meanings to two different URIs which both identify the same resource. The concept of the resource identified does not include its state -- "resource" and "deleted resource" identify the same resource. You do not have a "new, deleted resource". You have a new status for the existing resource, if you are using REST. If you interpret a DELETE as a MOVE, then you are creating a new identifier for the same resource, except now one of the URLs gives a 4xx error while the other indicates 200 OK -- the semantics of your mapping are not static in such a case. You have a new identifier for the same resource, but it is out of sync with the old identifier for the same resource. That is why I suggest using a 307 redirect if you want to implement such a trashcan setup, which is still not what I'm doing anyway. > >The status of that resource should itself be exposed as a >resource if you want clients to be able to manipulate it. If you >want clients to be able to manipulate any aspect of server state, >then that state must be exposed as a resource if you want to >comply with the REST constraints. > No, this is not REST. In REST, a resource has an identifier and a status. If the status needs changing, in REST a representation of the resource is manipulated in order to change the status of that resource. The identifier of the resource remains unchanged when a representation is altered to reflect a new state of that resource. I am really at a loss as to how you have come up with this "state must be exposed as a resource" bit; it has nothing whatsoever to do with REST. > >It’s not the second DELETE where you break the uniformity >constraint, it’s the _first_. > No, I'm sorry. I have a resource, I DELETE the resource, and afterwards the resource responds 410 Gone. 
Any client using any RESTful protocol would expect this to happen -- changing a resource from responding "success" to responding "failure" is the expected behavior of the DELETE method. You break from this by giving the resource a new name, so that requests for that resource respond "success" if the new name is known. But the request was for the resource to be deleted, not remapped to a new identifier. > >Exactly!! That is how your design breaks the constraint. You >spell it out in detail and then fail to realise the consequences >of what you said: the client cannot assume that 410 Gone means >anything but 410 Gone! But you expect the client to make such an >assumption. Your protocol breaks uniformity at that point. > No, I'm sorry, but you are imposing a constraint which does not exist in the spec. There is nothing about receiving a 410 Gone response to a GET request which precludes the client from sending a request to the same URL using a different (or even the same) method. You are expecting a client to assume permanence from a 4xx response, and claiming that, by not refusing to send any more requests to that address, all existing clients are in violation of RFC 2616. If my browser encounters a 410 response and I click "reload", the request is repeated. Why? Because the client cannot assume permanency of a 4xx response. Nowhere does RFC 2616 state that the client MUST NOT repeat such a request -- if it did, my app behavior would indeed be nonstandard by expecting a client to assume that it can repeat such a request. If I can repeat a GET request, why can't I make a request of that URL using a different method? I've responded to a DELETE request by making the resource unavailable -- what's wrong with that? > >How is that not a file-centric world view? You are fixated on how >your URIs map to your filesystem, which is what I was saying: you >don’t want to make up resources not backed by your filesystem, >which is a WebDAV-ish worldview. > Huh? 
We're talking about a specific implementation which uses a file; you are the one claiming that you can look into some crystal ball and infer that I believe one way or another based on that. In this example I use a file; in my real-world application no HTML is ever written to disk, everything is generated on the fly by transforming database output. There are no "files" to delete. But if a DELETE request is received, the resource will respond 410 Gone instead of generating a response on the fly by transforming database output. Much easier to talk in terms of "a file", except for all the people around here who then make nutty statements about how what I say must then only apply to WebDAV. But let's get back to talking about this particular implementation, and not making assumptions about how my use of a file in this case proves that I don't know what the hell I'm talking about, because if you're only interested in a pissing match I will ignore you. If you need help implementing a RESTful trashcan, then please start a thread on it because it's off-topic here and only causing confusion for others. -Eric
* "A. Pagaltzis" <pagaltzis@...> [2007-07-09 19:23]:
> to put things in context, all of my following points below apply
> to the following protocol:
>
> > DELETE /foo/bar
> < 200 OK
> < Content-Type: application/vnd.exampleorg.tombstone+xml
> <
> < <tombstone>
> < <dead href="/foo/bar" />
> < <epitaph href="/deleted/foo/bar" />
> < </tombstone>
>
> > GET /foo/bar
> < 410 Gone
>
> > DELETE /deleted/foo/bar
> < 204 No Content
>
> > GET /foo/bar
> < 404 Not Found
>
> All other suggestions I made were misguided.
Apparently this was so severely misunderstood that Eric fabricated
arguments out of whole cloth to put into my mouth again.
Let me try again, and this time I shall be even more explicit
because apparently the self-evident is not so.
> DELETE /foo/bar
< 200 OK
< Content-Type: application/vnd.exampleorg.tombstone+xml
<
< <tombstone>
< <dead href="/foo/bar" />
< <epitaph href="/deleted/foo/bar" />
< </tombstone>
> GET /foo/bar
< 410 Gone
> GET /deleted/foo/bar
< 200 OK
< Content-Type: application/vnd.exampleorg.tombstone+xml
<
< <tombstone>
< <dead href="/foo/bar" />
< <epitaph href="/deleted/foo/bar" />
< </tombstone>
Oh, lookit. That’s not the resource that was at /foo/bar.
> DELETE /deleted/foo/bar
< 204 No Content
> GET /foo/bar
< 404 Not Found
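The exchange above can be modeled as a small state machine. The following is a minimal sketch, not anything from the thread itself: storage is an in-memory dict, there is no real HTTP, and the tombstone body is reduced to a plain string.

```python
# Sketch of the tombstone protocol above as in-memory state transitions.
# DELETE -> 200 + tombstone, GET -> 410, DELETE on the tombstone -> 204,
# then GET -> 404 for both URLs. Names and layout are illustrative.

class TombstoneStore:
    def __init__(self):
        self.live = {}        # path -> entity body
        self.tombstones = {}  # dead path -> tombstone body

    def get(self, path):
        if path in self.live:
            return (200, self.live[path])
        if path.startswith("/deleted"):
            dead = path[len("/deleted"):]
            if dead in self.tombstones:
                return (200, self.tombstones[dead])
        if path in self.tombstones:
            return (410, None)  # resource is gone; its tombstone remains
        return (404, None)

    def delete(self, path):
        if path in self.live:
            del self.live[path]
            body = ('<tombstone><dead href="%s" />'
                    '<epitaph href="/deleted%s" /></tombstone>' % (path, path))
            self.tombstones[path] = body
            return (200, body)
        if path.startswith("/deleted"):
            dead = path[len("/deleted"):]
            if dead in self.tombstones:
                del self.tombstones[dead]  # now both URLs answer 404
                return (204, None)
        return (404, None)
```

Note that in this model the tombstone is itself a resource, so the second DELETE removes the tombstone and both URLs fall back to 404, matching the trace.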
Now Eric, if you could please tell me where the MOVE semantics
are, be my guest.
Otherwise, please go back, read my mail again, and write a new
response; this time without putting arguments in my mouth that
others made, but I didn’t (like permanency of 410), or
fabricating arguments out of whole cloth that I never put forth
(like claiming that I said that responding 204 to DELETE is not
RESTful or some weird nonsense like that; never mind that it
should be obvious from the above proposal that I can’t be
thinking that), or fixating on semantics I never proposed (like
moving a resource from one place to another) to the point where
I can’t stand to write a response because every second inline
reply is “who was talking about that?”.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On 7/9/07, A. Pagaltzis <pagaltzis@...> wrote: > Apparently this was so severely misunderstood that Eric fabricated > arguments out of whole cloth to put into my mouth again. Be nice.
Aristotle, what is your point? I described a mechanism which works for me; you are talking about some trashcan thing that has no bearing on what I am talking about, and challenging me to tell you why it is not better than what I have, when it has absolutely nothing to do with what I am trying to do. My use case is to DELETE a resource, causing subsequent GET requests to respond 410 Gone. Issuing a conditional DELETE changes the 410 Gone to a 404 Not Found. That is the topic of this thread, not moving things into trashcans. What does a trashcan have to do with any of this? Someone said that deleting a file twice was counterintuitive; I only mentioned trashcans in response to that claim -- we all delete things twice, all the time, so how can doing so be counterintuitive? This is a user perspective, nothing to do with any actual filesystem trashcan implementation. Heck, when I close a tab in Opera it goes into a trashcan, which I can then empty -- no filesystem there at all, but still a double-delete. If you would like to talk about something completely different, then please use a different thread, instead of trying to claim that my simple thing can't be right because it doesn't involve moving anything into a trashcan. If that's your point, honestly, I don't know. To reiterate what this thread is about: I have a resource /foo which responds 200 OK to a GET request. I issue a DELETE request to /foo. Now, a GET on /foo responds 410 Gone. Imagine my surprise that this is controversial enough, without even bringing up the second DELETE, to have exploded like it has, with you even going so far as to tell me this breaks the Uniform Interface constraint of REST. I just don't know how to respond to that, other than not to anymore. :-( -Eric
On 10 Jul 2007, at 03:46, A. Pagaltzis wrote: > > GET /foo/bar > < 410 Gone > > > GET /deleted/foo/bar > < 200 OK > < Content-Type: application/vnd.exampleorg.tombstone+xml > < > < <tombstone> > < <dead href="/foo/bar" /> > < <epitaph href="/deleted/foo/bar" /> > < </tombstone> Never mind the "Two DELETEs" debate, I think this is a reasonable solution, except that I had to look up "epitaph" in the dictionary. If the server is to differentiate between 404 Not Found and 410 Gone it clearly must at least remember that the 410-URI used to exist. Remembering this as a "tombstone" seems perfectly clear to me, and there shouldn't be a reason why the tombstone itself also could not be a resource. In some cases you don't want to give away that a certain resource used to exist (just like you might want to say 404 Not Found instead of 401 Unauthorized), and in those cases you can DELETE the tombstone to give the appearance that the resource never existed. Also maybe your storage capacity for tombstones is running out, and some maintenance script can delete very old tombstones. (I guess some dates would be useful to include in the tombstone metadata or document.) We can assume that it's not interesting to have tombstones for deleted tombstones, so a deleted tombstone can give 404 right away. (BTW, one way to avoid saving tombstones is to just use incrementing identifiers: if your current identifier counter for /foo is 4 and you have /foo/4, /foo/3 and /foo/1, then a GET /foo/2 can be answered with 410 Gone. /foo/5 is 404 Not Found, because the identifier counter (typically sequences in a database) is not higher than 4.) -- Stian Soiland, myGrid team School of Computer Science The University of Manchester http://www.cs.man.ac.uk/~ssoiland/
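The incrementing-identifier trick in Stian's closing parenthetical can be sketched in a few lines. Names here are illustrative; `counter` stands in for a database sequence.

```python
# Sketch of the counter trick above: with sequential ids there is no
# need to store tombstones. Any id at or below the current counter that
# is missing from storage must once have existed (410 Gone); any id
# above the counter was never allocated (404 Not Found).

def status_for(item_id, counter, existing):
    """Return the HTTP status for GET /foo/<item_id>."""
    if item_id in existing:
        return 200
    if item_id <= counter:
        return 410  # id was handed out once; the record is gone now
    return 404      # id never existed
```

With the example from the post (counter is 4; /foo/4, /foo/3 and /foo/1 exist), /foo/2 yields 410 and /foo/5 yields 404.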
> >I'm still confused. I fail to see how you can claim idempotence with >this setup. I think that's what all this comes down to. In the system >you are describing, 2 deletes to the exact same resource URI do two >different things. > >"Methods can also have the property of "idempotence" in that (aside > from error or expiration issues) the side-effects of N > 0 identical > requests is the same as for a single request. The methods GET, HEAD, > PUT and DELETE share this property." > >In the end you don't have to listen to anyone. You are free to do as >you wish. Obviously there are those that disagree with your view of >things. It is of course your prerogative to decide who is in error. > I have no problem discussing the issue, within the confines of my own example. I see your point, but I define "identical request" to mean a combination of factors being identical. For instance, issuing a DELETE request to one of my 410 Gone resources will result in a 400 Bad Request result every single time that identical request is repeated. However, if you add a conditional If-Match to that DELETE request, in other words after you've *modified* the request from before, you get a different result. Where you say, "2 deletes to the exact same resource URI do two different things," I hope you understand my response is that the 2 deletes in question are not identical requests to the exact same URI, just as a conditional GET request to a resource is not identical to an unconditional GET request to the same resource -- only one kind of GET is allowed to respond 304. The standard says identical _request_, not identical _method_. Take any everyday not-anything-to-do-with-me DELETE usage. The first time a resource is DELETEd, a 2xx response is issued. The next time that identical resource is DELETEd, a 4xx response is issued. The time after that, and every other N > _1_ time the identical request is repeated, there's a 4xx response. 
The reason this is still idempotent under the N > 0 rule is the exception to that rule, in parentheses, which says "aside from error... issues"; in other words, 4xx responses aren't counted. I suggest that for DELETE, N = 1 should be the rule, because frankly if, after I've DELETEd a resource (in an everyday, nothing-to-do-with-me sense), I repeat the identical request and get a 2xx success response every time, I will be left wondering what was successfully DELETEd the first time, if anyone follows what I'm saying. -Eric
Eric J. Bowman wrote: > >I'm still confused. I fail to see how you can claim idempotence with > >this setup. I think that's what all this comes down to. In the system > >you are describing, 2 deletes to the exact same resource URI do two > >different things. > > > >"Methods can also have the property of "idempotence" in that (aside > > from error or expiration issues) the side-effects of N > 0 identical > > requests is the same as for a single request. The methods GET, HEAD, > > PUT and DELETE share this property." > > > >In the end you don't have to listen to anyone. You are free to do as > >you wish. Obviously there are those that disagree with your view of > >things. It is of course your prerogative to decide who is in error. > > > > I have no problem discussing the issue, within the confines of my own > example. I see your point, but I define "identical request" to mean > a combination of factors being identical. For instance, issuing a > DELETE request to one of my 410 Gone resources will result in a 400 ...I'd make that more specific, such as 409 or 403... > Bad Request result every single time that identical request is > repeated. However, if you add a conditional If-Match to that DELETE > request, in other words after you've *modified* the request from before, > you get a different result. Hmm, are you saying that you allow the DELETE on the "gone" resource to succeed if it carries an If-Match? That seems to be a weird approach, as the 404/410 condition indicates "nothing is there", so any kind of If-Match IMHO should cause a 412 on that resource. > .. Best regards, Julian
Eric J. Bowman wrote: > Aristotle, what is your point? I described a mechanism which works for > me, you are talking about some trashcan thing that has no bearing on what > I am talking about, and challenging me to tell you why it is not better > than what I have, when it has absolutely nothing to do with what I am > trying to do. > > My use case is to DELETE a resource, causing subsequent GET requests to > respond 410 Gone. Issuing a conditional DELETE changes the 410 Gone to > a 404 Not Found. That is the topic of this thread, not moving things > into trashcans. > > What does a trashcan have to do with any of this? Someone said that > deleting a file twice was counterintuitive, I only mentioned trashcans > in response to that claim -- we all delete things twice, all the time, > so how can doing so be counterintuitive? This is a user perspective, > nothing to do with any actual filesystem trashcan implementation. > Heck, when I close a tab in Opera it goes into a trashcan, which I can > then empty -- no filesystem there at all, but still a double-delete. > > If you would like to talk about something completely different, then > please use a different thread, instead of trying to claim that my simple > thing can't be right because it doesn't involve moving anything into a > trashcan, if that's your point, honestly I don't know. > > To reiterate what this thread is about: > > I have a resource /foo which responds 200 OK to a GET request. I issue > a DELETE request to /foo. Now, a GET on /foo responds 410 Gone. > > Imagine my surprise that this is controversial enough without even > bringing up the second DELETE to have exploded like it has, with you > even going so far as to tell me this breaks the Uniform Interface > constraint of REST. I just don't know how to respond to that, other > than not to anymore. :-( > > -Eric > I'm still confused. I fail to see how you can claim idempotence with this setup. I think that's what all this comes down to. 
In the system you are describing, 2 deletes to the exact same resource URI do two different things. "Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property." In the end you don't have to listen to anyone. You are free to do as you wish. Obviously there are those that disagree with your view of things. It is of course your prerogative to decide who is in error. -- Aaron Dalton | Super Duper Games aaron@... | http://superdupergames.org
Hi Eric, I see that for some odd reason you switched from “MOVE semantics” to a nebulous “trashcan thing”. Far be it from me to wonder why, but even after asking you to quit putting words in my mouth, you continue to do so. * Eric J. Bowman <eric@...> [2007-07-10 09:35]: > I just don't know how to respond to that, other than not to > anymore. :-( Neither do I. Over and out, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Stian Soiland <ssoiland@...> [2007-07-04 15:05]: > What about PUTing with the Content-Type: multipart/byteranges > [1]? If the server don't understand multipart/byteranges, it > will say 406 Not Acceptable. If the server accepts any MIME type for the resource at that URI, then it will try to store the patch as the content of the resource. If arbitrary media types are not acceptable, the PUT will probably be rejected. That’s what PUT means. > It has already been agreed that since all representations > are/can be partial on GET, so would representations that are > uploaded with PUT often be partial. Sure, but it’s a red herring here. The fact that a client can’t fill in all aspects of the resource with a single representation doesn’t mean that it *intends* the missing aspects to be filled in from the previous state of the resource; it merely gives the server licence to fill them in appropriately, probably by setting them to some default state, although deriving from some previous state of the resource is also legal. The difference between PUT and PATCH is that with the latter, the client is explicitly requesting that the previous state of the resource be taken into account, and that the entity body it sends is not a (possible) rendition of the resource, but a rendition of the differences between the previous state of the resource and the intended new state. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
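The PUT/PATCH distinction Aristotle draws can be made concrete with a toy key-value resource. This is only an illustration: the field names and the merge-style diff are invented here, and real PATCH diff formats are media-type specific.

```python
# Toy contrast of PUT and PATCH as described above. With PUT, fields
# missing from the entity revert to server defaults, not to their
# previous values; with PATCH, the entity is a diff and missing fields
# keep the previous state of the resource.

DEFAULTS = {"title": "untitled", "body": "", "draft": True}

def put(entity):
    state = dict(DEFAULTS)  # server fills in missing aspects itself
    state.update(entity)
    return state

def patch(current, diff):
    state = dict(current)   # previous state explicitly taken into account
    state.update(diff)
    return state
```

So `put({"title": "New"})` resets `body` to its default, while `patch(current, {"title": "New"})` preserves the previous `body` -- which is exactly the licence-versus-request distinction in the post.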
At Tue, 10 Jul 2007 09:07:33 +0000, "Eric J. Bowman" <eric@...> wrote: > I have no problem discussing the issue, within the confines of my > own example. I see your point, but I define "identical request" to > mean a combination of factors being identical. For instance, issuing > a DELETE request to one of my 410 Gone resources will result in a > 400 Bad Request result every single time that identical request is > repeated. However, if you add a conditional If-Match to that DELETE > request, in other words after you've *modified* the request from > before, you get a different result. Although I believe that you may be technically correct here, it is important to keep in mind that for most people a request is a combination <method, uri, host header, entity>. Nowhere in HTTP can I think of an example of a request whose essential meaning is modified by a request header (except, as above, host, which is only not part of the URI as a matter of history). The request headers are there to make minor modifications to a request, not to change the overall meaning. Additionally, there is almost certainly going to be confusion when clients try to DELETE a resource & discover that there is now added semantics to the DELETE method in the form of If-Match, added semantics which are *required* to successfully perform a DELETE on your system. This added confusion may be worse than the technical non-idempotence of your original two stage DELETE. > […] > Take any everyday not-anything-to-do-with-me DELETE usage. The first > time a resource is DELETEd, a 2xx response is issued. The next time > that identical resource is DELETEd, a 4xx response is issued. The > time after that, and every other N > _1_ time the identical request > is repeated there's a 4xx response. The reason this is still > idempotent under the N > 0 rule is because of the exception to that > rule, in parentheses, which says "aside from error... issues" in > other words, 4xx responses aren't counted. 
No, the reason that multiple DELETEs are idempotent is that they have the same side effect (viz., the resource is deleted). What is returned as a response is not an issue. I am not going to suggest that you not implement this, but I must suggest that you consider, when you have received as much resistance as your two stage DELETE has, if it is worth all the trouble. Many people have had a bad reaction to various aspects of this idea. best, Erik Hetzner
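Erik's point -- that idempotence is defined over side effects on the server, not over response codes -- can be shown in a few lines (a plain dict stands in for the server's storage):

```python
# N > 0 identical DELETEs leave the server in the same state as one
# DELETE, which is what makes the method idempotent -- even though the
# second request answers 404 rather than 204.

def delete(store, path):
    if path in store:
        del store[path]
        return 204
    return 404

store = {"/foo": "entity"}
first = delete(store, "/foo")   # 204, resource removed
second = delete(store, "/foo")  # 404, but the store is unchanged
```

The two responses differ, yet after one request or after two the store contains the same thing: nothing.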
> >Additionally, there is almost certainly going to be confusion when >clients try to DELETE a resource & discover that there is now added >semantics to the DELETE method in the form of If-Match, added >semantics which are *required* to successfully perform a DELETE on >your system. This added confusion may be worse than the technical >non-idempotence of your original two stage DELETE. > How so? A user on my system can issue a DELETE and the resource will be removed, there is no requirement for additional headers. What is wrong with one DELETE resulting in the resource being removed and responding 410 Gone? No added semantics involved, no If-Match required, so easy anyone can do it, nobody but me needs to know about the second DELETE as it is administrative in nature. > >I am not going to suggest that you not implement this, but I must >suggest that you consider, when you have received as much resistance >as your two stage DELETE has, if it is worth all the trouble. Many >people have had a bad reaction to various aspects of this idea. > I refuse to consider most of the "bad reactions" that have come about here, as they either quote a section of RFC 2616 without explaining just what the problem is they're seeing, or frame their objections in terms of some other example like a trashcan. The only time I will change what I am doing due to a "bad reaction" here is if the objection makes sense, not because there's a standard kneejerk objection here to anyone doing anything different. The only "is it worth all the trouble" question I ask myself, pertains to posting my ideas to this list, because the helpful responses tend to be few and far between, with lots of people assuming I must be wrong because a lot of other people assume I am wrong and start insisting that maybe that's a valid reason to not do what I am doing. The only result of this is that when it comes to REST, I will prefer to keep my own counsel. -Eric
> >Hmm, are you saying that you allow the DELETE on the "gone" resource to >succeed if it carries an If-Match? That seems to be a weird approach, as >the 404/410 condition indicates "nothing is there", so any kind of >If-Match IMHO should cause a 412 on that resource. > Well, the If-Match has to match the ETag on the 410 response, if it doesn't then the response is 412. I've been pondering this a bit further, let's say the resource hasn't been deleted, and a conditional DELETE comes in with an If-Match which matches the 200 representation's ETag. Then the resource is deleted, and the response code is 404. Unconditional DELETE requests change the response to 410, still. -Eric
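As a sketch, the rules Eric describes here look something like the following. The state names and ETag values are illustrative, and the immediate status for the conditional DELETE on a live resource is an assumption (the post only specifies the subsequent 404):

```python
# Sketch of the two-stage conditional DELETE described above. A live
# resource becomes 410 Gone on unconditional DELETE; an If-Match that
# matches the current ETag moves the resource to the 404 state; a
# non-matching If-Match yields 412 Precondition Failed.

LIVE, GONE, ABSENT = "live", "gone", "absent"
ETAG = {LIVE: '"v1"', GONE: '"410Gone"'}  # illustrative values

def delete(state, if_match=None):
    """Return (response_status, new_state)."""
    if state == LIVE:
        if if_match is None:
            return (200, GONE)    # subsequent GETs answer 410
        if if_match == ETAG[LIVE]:
            return (200, ABSENT)  # subsequent GETs answer 404, per the post
        return (412, LIVE)
    if state == GONE:
        if if_match is None:
            return (410, GONE)    # one choice suggested later in the thread
        if if_match == ETAG[GONE]:
            return (204, ABSENT)
        return (412, GONE)
    return (404, ABSENT)
```

This makes the "identical request" argument visible: the unconditional and conditional DELETEs are different requests and drive the resource to different final states.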
At Tue, 10 Jul 2007 18:34:20 +0000, "Eric J. Bowman" <eric@...> wrote: > How so? A user on my system can issue a DELETE and the resource will > be removed, there is no requirement for additional headers. What is > wrong with one DELETE resulting in the resource being removed and > responding 410 Gone? No added semantics involved, no If-Match > required, so easy anyone can do it, nobody but me needs to know > about the second DELETE as it is administrative in nature. Sorry, I must have missed an update. I read, in your message from 02 Jul 2007 11:49:46: > If the filesize is greater than zero bytes and the DELETE is > unconditional, the response is 409 Conflict with a message body > “Unconditional DELETE request detected.” Which indicated to me that if the client tries to DELETE a resource without supplying the If-Match header, the DELETE will fail. > I refuse to consider most of the "bad reactions" that have come > about here, as they either quote a section of RFC 2616 without > explaining just what the problem is they're seeing, or frame their > objections in terms of some other example like a trashcan. If you don’t like the trashcan example you should not have brought it up. But I think we can consider it withdrawn. > The only time I will change what I am doing due to a "bad reaction" > here is if the objection makes sense, not because there's a standard > kneejerk objection here to anyone doing anything different. > The only "is it worth all the trouble" question I ask myself, > pertains to posting my ideas to this list, because the helpful > responses tend to be few and far between, with lots of people > assuming I must be wrong because a lot of other people assume I > am wrong and start insisting that maybe that's a valid reason to > not do what I am doing. The only result of this is that when it > comes to REST, I will prefer to keep my own counsel. 
If you feel that you have addressed people’s objections, then you have done more than most in terms of contributing to the process of interoperability. best, Erik Hetzner ;; Erik Hetzner, California Digital Library ;; gnupg key id: 1024D/01DB07E3
At Tue, 10 Jul 2007 19:30:54 +0000, "Eric J. Bowman" <eric@...> wrote: > Right, that's when I was using an If-None-Match on the first DELETE, > Aristotle suggested using an If-Match on the second DELETE instead, > and that is exactly the change I made. I should note that I'm a bit > embarrassed for not catching that myself. My mistake. Thanks. best, Erik Hetzner ;; Erik Hetzner, California Digital Library ;; gnupg key id: 1024D/01DB07E3
> >Sorry, I must have missed an update. I read, in your message from 02 >Jul 2007 11:49:46: > >> If the filesize is greater than zero bytes and the DELETE is >> unconditional, the response is 409 Conflict with a message body >> “Unconditional DELETE request detected.” > >Which indicated to me that if the client tries to DELETE a resource >without supplying the If-Match header, the DELETE will fail. > Right, that's when I was using an If-None-Match on the first DELETE, Aristotle suggested using an If-Match on the second DELETE instead, and that is exactly the change I made. I should note that I'm a bit embarrassed for not catching that myself. -Eric
Eric J. Bowman wrote: > > > > > >Hmm, are you saying that you allow the DELETE on the "gone" resource to > >succeed if it carries an If-Match? That seems to be a weird approach, as > >the 404/410 condition indicates "nothing is there", so any kind of > >If-Match IMHO should cause a 412 on that resource. > > > > Well, the If-Match has to match the ETag on the 410 response, if it > doesn't then the response is 412. I've been pondering this a bit An ETag on a 410 response doesn't make any sense at all. It implies that there is a variant that a client *could* request, but the 404/410 status implies there isn't any. > ... Best regards, Julian
> >An ETag on a 410 response doesn't make any sense at all. It implies that >there is a variant that a client *could* request, but the 404/410 status >implies there isn't any. > That's what I thought at first, too. But I took another read of REST and RFC 2616 and changed my mind: "Response messages may include both representation metadata and resource metadata: information about the resource that is not specific to the supplied representation... Depending on the message control data, a given representation may indicate... the value of some other resource... [like] a representation of some error condition for a response." (5.2.1.2) I think an ETag is representation metadata, not resource metadata. I see nothing in RFC 2616 to support caching any 4xx response, whereas 301 says it MAY be cached and 307 says it MUST NOT be cached. But you can send a no-cache directive and an ETag with any 4xx error containing an entity body, and the ETag will only pertain to (one variant of) that representation of an error condition: "Entity tags are used for comparing two or more entities from the same requested resource... An entity tag MUST be unique across all versions of all entities associated with a particular resource. A given entity tag value MAY be used for entities obtained by requests on different URIs." (RFC 2616, 3.11) I interpret this as saying that I can set ETag: "410Gone" for any 410 response on my server (the entity body is the same every time). Any resource responding 410 Gone contains this *representation metadata* which only implies that there is no other representation available as a response from that resource with the same ETag -- and there isn't. I don't see where that implies there are any variants beyond possibly a different custom 410 message than I was using the day before, because the ETag is specific to the entity body of the representation of the error condition. I won't be setting an ETag on the 404 response. 
> >...I'd make that more specific, such as 409 or 403... > Actually, I think an unconditional DELETE on a 410 Gone should just respond 410 Gone, just like any DELETE attempt on a 404 response will just respond 404. -Eric
Eric J. Bowman wrote: > >An ETag on a 410 response doesn't make any sense at all. It implies that > >there is a variant that a client *could* request, but the 404/410 status > >implies there isn't any. > > > > That's what I thought at first, too. But I took another read of REST and > RFC 2616 and changed my mind: > > "Response messages may include both representation metadata and resource > metadata: information about the resource that is not specific to the > supplied representation... Depending on the message control data, a > given representation may indicate... the value of some other resource... > [like] a representation of some error condition for a response." (5.2.1.2) Speaking just for me: I really don't care what REST says here. What's interesting to me is whether this is compliant with RFC 2616. > I think an ETag is representation metadata, not resource metadata. I see See <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.6.2>: "The response-header fields allow the server to pass additional information about the response which cannot be placed in the Status-Line. These header fields give information about the server and about further access to the resource identified by the Request-URI." > nothing in RFC 2616 to support caching any 4xx response, whereas 301 says > it MAY be cached and 307 says it MUST NOT be cached. But you can send a > no-cache directive and an ETag with any 4xx error containing an entity > body, and the ETag will only pertain to (one variant of) that > representation of an error condition: I'm not sure how caching is relevant here. > "Entity tags are used for comparing two or more entities from the same > requested resource... An entity tag MUST be unique across all versions > of all entities associated with a particular resource. A given entity > tag value MAY be used for entities obtained by requests on different URIs."
> (RFC 2616, 3.11) > > I interpret this as saying that I can set ETag: "410Gone" for any 410 > response on my server (the entity body is the same every time). Any I guess we'll continue to disagree about whether a 404/410 response is a representation of a resource. As far as I am concerned, it is not. It's how the server states that there is no representation it can send, so an ETag on that message is meaningless. > ... Best regards, Julian
> >Speaking just for me; I really don't care what Rest says here. What's >interesting to me is whether this is compliant to RFC2616, > I'm not sure how you're getting from this... "The response-header fields allow the server to pass additional information about the response which cannot be placed in the Status-Line. These header fields give information about the server and about further access to the resource identified by the Request-URI." ...to "4xx responses are not allowed to have ETags", I'm just not seeing that in there at all. A 410 response with an entity body can obviously have an entity tag. Any entity body can have an entity tag, according to the spec. It is "information about the response" to the request. > >I guess we'll continue to disagree whether a 404/410 response is a >representation of a resource. As far as I am concerned, it is not. It's >the way how the server states that there is no representation it can >send, so an ETag on that message is meaningless. > Does it matter? According to RFC 2616, sec. 7: "Request and Response messages MAY transfer an entity if not otherwise restricted by the request method or response status code." A 204 response, then, may not have an entity but a 410 response certainly can. If I'm transferring an entity as part of a response, then I can give that entity a tag. The definition of "entity" in sec. 1.3 is: "The information transferred as the payload of a request or response. An entity consists of metainformation in the form of entity-header fields and content in the form of an entity-body, as described in section 7." My 410 Gone response has a payload, or entity, which consists of both entity-headers (like ETag) and an entity-body (an HTML message). You're not convincing me this goes against RFC 2616. -Eric
Eric J. Bowman wrote: > >Speaking just for me; I really don't care what Rest says here. What's > >interesting to me is whether this is compliant to RFC2616, > > > > I'm not sure how you're getting from this... > > "The response-header fields allow the server to pass additional > information about the response which cannot be placed in the > Status-Line. These header fields give information about the server and > about further access to the resource identified by the Request-URI. " > > ...to "4xx responses are not allowed to have ETags", I'm just not seeing > that in there at all. A 410 response with an entity body, can obviously > have an entity tag. Any entity body, can have an entity tag, according > to the spec. It is "information about the response" to the request. An entity tag response header gives information about an entity available on the Request-URI. A 404/410 status indicates there are none. > > > >I guess we'll continue to disagree whether a 404/410 response is a > >representation of a resource. As far as I am concerned, it is not. It's > >the way how the server states that there is no representation it can > >send, so an ETag on that message is meaningless. > > > > Does it matter? According to RFC 2616, sec. 7: > > "Request and Response messages MAY transfer an entity if not otherwise > restricted by the request method or response status code." > > A 204 response, then, may not have an entity but a 410 response certainly > can. If I'm transferring an entity as part of a response, then I can give > that entity a tag. The definition of "entity" in sec. 1.3 is: Yes, it carries an entity. But no, the ETag header does not apply to the entity being returned, but to the entity available on the server. > "The information transferred as the payload of a request or response. An > entity consists of metainformation in the form of entity-header fields and > content in the form of an entity-body, as described in section 7." 
ETag is not an entity header field but a response header field, so the information above does not apply (see Section 6.2 vs 7.1). > My 410 Gone response has a payload, or entity, which consists of both > entity-headers (like ETag) and an entity-body (an HTML message). You're > not convincing me this goes against RFC 2616. Maybe I did now. Best regards, Julian
On 7/11/07, Julian Reschke <julian.reschke@...> wrote: > > "The information transferred as the payload of a request or response. An > > entity consists of metainformation in the form of entity-header fields and > > content in the form of an entity-body, as described in section 7." > > ETag is not entity header field but a response header field, so the > information above does not apply (see Section 6.2 vs 7.1). +1 Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
> >An entity tag response header gives information about an entity >available on the Request-URI. A 404/410 status indicates there are none. > An entity tag response header gives information about the entity enclosed with the response to the request, whatever that response may be. A 404 or 410 response does not mean there are no representations available for the requested resource -- they _are_ representations of the requested resource, according to REST. > >Yes, it carries an entity. But no, the ETag header does not apply to the >entity being returned, but to the entity available on the server. > Huh? The server just returned the entity available on the server. A 404 or a 410 response includes an entity, in response to a request for the resource. Perhaps they are representations of a "null" resource? > >ETag is not entity header field but a response header field, so the >information above does not apply (see Section 6.2 vs 7.1). > Huh? An ETag is both an entity header field and a response header. How can an entity tag _not_ be a response header, when included with a response to a request? > >Maybe I did now. > Sorry, no sale. Maybe you can explain just what REST constraint is broken by applying an ETag to a 410 response, or perhaps explain just what it is that can go wrong. If there is some actual downside to this, I would be more receptive to your arguments. But if it doesn't make any "bad things" happen, wouldn't that tend to support my argument? I tend towards pragmatism: when I come up with something and it works as expected, and I can't find any negative effects, I tend not to believe that I've broken any REST constraints or violated RFC 2616. When I implement something and "bad things" do happen, then I stop myself and attempt to discern, through both RFC 2616 and REST, where it is I went wrong and why.
Like my first shot at the double-DELETE, I discovered problems through testing, which led me to identify the error I was making in both REST and RFC 2616 terms. Now I've fixed it, and believe it or not, putting an ETag on a 410 response doesn't cause the sky to fall. ;-) If there is some obscure, theoretical error remaining in the interaction that causes no ill effects then I question the theory, not my working code. -Eric
On 7/11/07, Eric J. Bowman <eric@...> wrote: > An entity tag response header gives information about the entity enclosed > with the response to the request, whatever that response may be. A 404 > or 410 response does not mean there are no representations available > for the requested resource -- they _are_ representations of the requested > resource, according to REST. No, they're representations of the state of the server when asked for a representation of the targeted resource. If they were representations of the actual state of the resource, the status code would have been 2xx. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Eric J. Bowman wrote: > > > > > >An entity tag response header gives information about an entity > >available on the Request-URI. A 404/410 status indicates there are none. > > > > An entity tag response header gives information about the entity enclosed > with the response to the request, whatever that response may be. A 404 Nope. As Mark said. > or 410 response does not mean there are no representations available > for the requested resource -- they _are_ representations of the requested > resource, according to REST. At this point, I really don't care about REST. REST is an interesting topic, but it doesn't override what RFC2616 says. > >Yes, it carries an entity. But no, the ETag header does not apply to the > >entity being returned, but to the entity available on the server. > > > > Huh? The server just returned the entity available on the server. A > 404 or a 410 response includes an entity, in response to a request for > the resource. Perhaps they are representations of a "null" resource? In general they are not representations of *any* resource (at least not one with a URI known by the client). They are just messages. > >ETag is not entity header field but a response header field, so the > >information above does not apply (see Section 6.2 vs 7.1). > > > > Huh? An ETag is both an entity header field and a response header. Eric, please read the definitions in RFC2616. Not every header appearing in an HTTP response is a "Response Header", as defined in Section 6.2. >... > Sorry, no sale. Maybe you can explain just what REST constraint is > broken by applying an ETag to a 410 response, or perhaps explain just > what it is that can go wrong. If there is some actual downside to Again, all I'm concerned with is how HTTP is used here. I don't care about REST concepts right now. > this, I would be more receptive of your arguments. But if it doesn't > make any "bad things" happen, wouldn't that tend to support my argument?
No, it just means that you are using HTTP machinery in a way not sanctioned by the spec, so basically you're inventing a new protocol. > I tend towards pragmatism, when I come up with something and it works as > expected, and I can't find any negative effects, I tend not to believe > that I've broken any REST constraints or violated RFC 2616. Interesting approach. > When I implement something and "bad things" do happen, then I stop > myself and attempt to discern, through both RFC 2616 and REST, where it > is I went wrong and why. Like my first shot at the double-DELETE, I > discovered problems through testing, which led me to identify the error > I was making in both REST and RFC 2616 terms. Now I've fixed it, and > believe it or not, putting an ETag on a 410 response doesn't cause the > sky to fall. ;-) If there is some obscure, theoretical error > remaining in the interaction that causes no ill effects then I question > the theory, not my working code. It's just something that is not compliant to RFC2616. You won't notice any problems as long as you're just using your own client, no intermediaries, and so on. Best regards, Julian
A. Pagaltzis wrote: > Considering that you were only just fervently talking about how > database people tend to see tables everywhere, it surprises me > that you go on to say that XML should be your hammer and every > problem a nail. Funny. I thought I was quite careful to say the opposite. XML most definitely doesn't do everything, but what it can't do (with a couple of interesting if esoteric exceptions) JSON can't do either. I remain unimpressed by the claims people make that JSON is easier. I've worked with both, and it doesn't seem so to me. JSON may be easier than DOM, but it is not easier than XML. The claims of JSON's superiority seem mostly based on confusing XML with DOM on the one hand, while refusing to learn what XML actually is. Developers like to be able to just shove JSON into JavaScript and not learn anything new. Of course, evaling JSON as JavaScript doesn't actually work, but most developers haven't noticed that yet. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 7/11/07, Elliotte Harold <elharo@...> wrote: > > A. Pagaltzis wrote: > > > Considering that you were only just fervently talking about how > > database people tend to see tables everywhere, it surprises me > > that you go on to say that XML should be your hammer and every > > problem a nail. > > Funny. I thought I was quite careful to say the opposite. XML most > definitely doesn't do everything, but what it can't do (with a couple of > interesting if esoteric exceptions) JSON can't do either. > > I remain unimpressed by the claims people make that JSON is easier. I've > worked with both, and it doesn't seem so to me. JSON may be easier than > DOM, but it is not easier than XML. XML is an incredibly crappy way to write hashtables and arrays. JSON is good at that. Lots of people want a way to pass hashtables and arrays with unicode keys and values, and be able to squeeze unicode into ascii (an undervalued ability of both JSON and XML). It is much easier to include Web content in JSON text, because ampersands and angle brackets are not syntactically significant. XML's superficial resemblance to HTML ends up hurting it. > > The claims of JSON's superiority seem mostly based on confusing XML with > DOM on the one hand, while refusing to learn what XML actually is. > Developers like to be able to just shove JSON into JavaScript and not > learn anything new. Of course, evaling JSON as JavaScript doesn't > actually work, but most developers haven't noticed that yet. I don't think these claims can be seriously answered unless some concrete examples are given. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
Robert Sayre wrote: >> >> The claims of JSON's superiority seem mostly based on confusing XML with >> DOM on the one hand, while refusing to learn what XML actually is. >> Developers like to be able to just shove JSON into JavaScript and not >> learn anything new. Of course, evaling JSON as JavaScript doesn't >> actually work, but most developers haven't noticed that yet. > > I don't think these claims can be seriously answered unless some > concrete examples are given. > http://www.google.com/search?btnG=Google+Search&q=json+security -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 7/11/07, Elliotte Harold <elharo@...> wrote: > Robert Sayre wrote: > > >> > >> The claims of JSON's superiority seem mostly based on confusing XML with > >> DOM on the one hand, while refusing to learn what XML actually is. > >> Developers like to be able to just shove JSON into JavaScript and not > >> learn anything new. Of course, evaling JSON as JavaScript doesn't > >> actually work, but most developers haven't noticed that yet. > > > > I don't think these claims can be seriously answered unless some > > concrete examples are given. > > > > http://www.google.com/search?btnG=Google+Search&q=json+security Can you be more specific? It looks like you're referring to the XSS attacks on JSON that resulted in information disclosure... of course, these were services that were "secured" by cookies. The possibility of executing JSON with a script tag does make some attacks easier, but it doesn't make anything new possible. There are a variety of technologies available that will preserve confidentiality and prevent replay attacks. The payload format doesn't matter that much. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
* Eric J. Bowman <eric@...> [2007-07-04 03:40]:
> >No it isn't. Stored does not "very unambiguously" mean
> >"replaced". With such abstract terms ("store", "replace",
> >"modify", "update" -- elsewhere in the spec, PUT is referred
> >to as an "updating" method), only "replaced" very
> >unambiguously means "replaced".
>
> There's some disagreement here, which stems from the wording of
> the RFC. But HTTP is not REST,
HTTP is a protocol for REST applications.
> if we are discussing the semantics of PUT in REST terms
> (generic interface) then store means replace in RFC 2616 just
> like STOR means replace in RFC 765.
A generic interface can be anything. A “uniform interface” has no
particular properties other than being uniform. HTTP provides the
means to implement *a* particular uniform interface. The
specifics of the uniform interface defined by HTTP are described
in RFC 2616 and nowhere else.
> "REST does not restrict communication to a particular protocol,
> but it does constrain the interface between components, and
> hence the scope of interaction and implementation assumptions
> that might otherwise be made between components. For example,
> the Web's primary transfer protocol is HTTP, but the
> architecture also includes seamless access to resources that
> originate on pre-existing network servers, including FTP,
> Gopher, and WAIS. Interaction with those services is restricted
> to the semantics of a REST connector."
This is exactly what I’m saying.
> This tells me that a REST connector must understand that the
> semantics of GET equal the semantics of RETR, APPE=POST,
> DELE=DELETE, LIST=OPTIONS and STOR=PUT in order to meet the
> Uniform Interface constraint.
Err. Yes, that is true if you are implementing a RESTful bridge
to old protocols like FTP. It is not a universal truth about
REST. Sorry.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Elliotte Harold <elharo@...> [2007-07-12 00:00]: > I remain unimpressed by the claims people make that JSON is > easier. I've worked with both, and it doesn't seem so to me. > JSON may be easier than DOM, but it is not easier than XML. JSON is much simpler than the XML Infoset. Are you willing to argue about that? The XML API in use is a red herring. Yes, if you try to do XMLish things with JSON it won’t be easier. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi Nick, * Nick Gall <nick.gall@...> [2007-07-02 07:15]: > --- In rest-discuss@yahoogroups.com, "A. Pagaltzis" <pagaltzis@...> wrote: > >And with that we return to PUT: RFC 2616 is perfectly clear > >that by using PUT, the client means that omitted parts of the > >entity are to be removed. This is 100% unambiguous. By PUT the > >client means "replace." > > 100% unambiguous?! This is what I'm just not seeing. Where in > section 9.6 does it unambiguously say "replace" or "omitted > parts of the entity are to be removed"? All I see are: > > 1. "be stored" 2. "considered a modified version" 3. > "existing resource is modified" > > Neither of which means "replace" 100% unambiguously. There's > plenty of ambiguity in both phrases. (BTW, even "replace" is > still somewhat ambiguous, since it could mean "partial > replacement" or "complete replacement".) If the spec meant to > be 100% unambiguous, why didn't it just say, "If the > Request-URI refers to an already existing resource, the > enclosed entity SHOULD be considered [a complete replacement] > version of the one residing on the origin server." Now THAT'S > unambiguous. As Robert Sayre would say, that’s why specs today are so much harder to read than 10 years ago. > By no stretch of the imagination does "modified version" > unambiguously mean "complete replacement version". It may be useful to remember that this discussion is about why PATCH must be a method separate from PUT, and to then consider what the wording would look like if you were writing this sentence in relation to PATCH. For the purposes of PATCH vs PUT, the two wordings you consider polar opposites are almost indistinguishable. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
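[Editor's note] The distinction being argued here (PUT as complete replacement, PATCH as partial modification) can be made concrete with a toy resource store. This is an illustration of the semantics only, not anyone's implementation; the merge-style PATCH and the field names are invented for the example.

```python
# Toy illustration of PUT-vs-PATCH semantics: PUT replaces the stored
# representation wholesale, so omitted fields disappear; a hypothetical
# merge-style PATCH only touches the fields it names.
def put(store, uri, entity):
    store[uri] = dict(entity)       # complete replacement

def patch_merge(store, uri, changes):
    store[uri].update(changes)      # partial modification

store = {}
put(store, "/contact/1", {"name": "Bill", "email": "bill@example.org"})

patch_merge(store, "/contact/1", {"email": "bill@example.com"})
# name survives a PATCH that omits it
assert store["/contact/1"] == {"name": "Bill", "email": "bill@example.com"}

put(store, "/contact/1", {"email": "bill@example.net"})
# name is gone -- PUT replaced the whole entity
assert store["/contact/1"] == {"email": "bill@example.net"}
```

Read "be stored" as the first function and "modified version" as compatible with either, and the thread's disagreement over RFC 2616's wording becomes easy to see.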
* Julian Reschke <julian.reschke@...> [2007-07-11 14:00]: > You won't notice any problems as long as you're just using your > own client, no intermediaries, and so on. The same can be said of RPC and WS-*, of course. Maybe that says something. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Robert Sayre wrote:
> XML is an incredibly crappy way to write hashtables and arrays. JSON
> is good at that. Lots of people want a way to pass hashtables and
> arrays with unicode keys and values, and be able to squeeze unicode
> into ascii (an undervalued ability of both JSON and XML).
Hashtables and arrays are both easy:
<hashtable>
<entry>
<key>foo</key>
<value>bar</value>
</entry>
</hashtable>
<array>
<entry>1</entry>
<entry>45</entry>
<entry>foo</entry>
<entry>17.6</entry>
</array>
You can be less verbose or more typed if you like, but to my way of
thinking verbosity is a feature and data typing is a bug.
Of course most developers use XML for more complex problems than that.
> It is much
> easier to include Web content in JSON text, because ampersands and
> angle brackets are not syntactically significant. XML's superficial
> resemblance to HTML ends up hurting it.
>
Again, this is true only for developers who don't understand XML and
don't want to learn. HTML is incredibly easy to squeeze into XML, far
easier than it is to squeeze into JSON, but you have to be willing to
treat it as well-formed markup, not a random string. Doing so makes it
much easier to process in various ways with various tools. HTML as a
string can't go beyond document.write. :-(
--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
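[Editor's note] Elliotte's hashtable markup really is easy to pull into a native data structure; here is a minimal sketch using Python's standard library (closing tags made well-formed for parsing), shown alongside the JSON equivalent. The helper function is invented for this example.

```python
import json
import xml.etree.ElementTree as ET

# The same {'foo': 'bar'} hashtable in both serializations. The XML
# layout follows the example in the post above.
XML = """
<hashtable>
  <entry>
    <key>foo</key>
    <value>bar</value>
  </entry>
</hashtable>
"""

def xml_to_dict(text):
    # Map each <entry>'s <key> text to its <value> text.
    root = ET.fromstring(text)
    return {e.findtext("key"): e.findtext("value")
            for e in root.findall("entry")}

assert xml_to_dict(XML) == json.loads('{ "foo": "bar" }') == {"foo": "bar"}
```

Both sides of the argument are visible here: the XML version needs a small bespoke reader, while the JSON version maps directly; conversely, nothing about the XML reader is hard.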
Robert Sayre wrote: > Can you be more specific? It looks like you're referring to the XSS > attacks on JSON that resulted in information disclosure... of course, > these were services that were "secured" by cookies. XSS is only one problem. There's a much more fundamental one with evaling JSON as JavaScript. Shipping executable code around and then evaluating it without security checks is dangerous. Again, the Google search I recommended will lead you to plenty of details on these attacks if you're interested. > The possibility of executing JSON with a script tag does make some > attacks easier, but it doesn't make anything new possible. There are a > variety of technologies available that will preserve confidentiality > and prevent replay attacks. The payload format doesn't matter that > much. However, the parser matters a great deal. Of course you can use a real parser on JSON, but many programs don't. That JSON can be and is treated as executable code is a major flaw in its design, one XML doesn't share. JSON, by design, encourages poor security and a variety of attacks. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
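[Editor's note] The real-parser-vs-eval point is easy to demonstrate: a strict JSON parser accepts only data, while evaluating the same bytes as program text would execute whatever they contain. A sketch in Python (the same contrast applies to JavaScript's eval; the malicious payload is invented for illustration):

```python
import json

# A strict JSON parser accepts pure data...
assert json.loads('{"a": 1}') == {"a": 1}

# ...and rejects a payload that smuggles in an executable expression,
# which a naive eval() of the same text would happily run.
malicious = '{"a": __import__("os").getpid()}'
try:
    json.loads(malicious)
    raise AssertionError("strict parser should have rejected this")
except json.JSONDecodeError:
    pass  # rejected, as a data-only parser should
```

This is the class of attack a real parser prevents and that treating JSON as executable code invites.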
* Elliotte Harold <elharo@...> [2007-07-12 11:10]:
> Robert Sayre wrote:
> > XML is an incredibly crappy way to write hashtables and
> > arrays. JSON is good at that. Lots of people want a way to
> > pass hashtables and arrays with unicode keys and values, and
> > be able to squeeze unicode into ascii (an undervalued ability
> > of both JSON and XML).
>
> Hashtables and arrays are both easy:
>
> <hashtable>
> <entry>
> <key>foo</key>
> <value>bar</value>
> </entry>
> </hashtable>
The problem with that is the mindboggling amount of indirection
you have introduced. You have an element containing whitespace
text nodes and elements, and each subelement in turn contains
whitespace text nodes and two elements, each of which contains
a text node.
By comparison:
{ "foo": "bar" }
That’s a hash containing a key with a string value.
That tree of nodes is XML’s data model: it’s called the XML Infoset.
And on top of that you have lots of people using a wide variety
of markup layouts to express a hash table. There’s only one
reasonable way to write a hash table in JSON and everyone who is
reasonable uses that.
Another issue is that the structure you chose does not enforce
uniqueness of hash keys at the parser level, which the JSON
equivalent does. If you wanted to do that with XML, you would
have to express the keys as attributes with arbitrary names on a
single element. But attribute values can only be text nodes, so
you need further complication to make it work for arbitrarily
nested structures. Off the top of my head this leads to a design
vaguely like this:
<hashtable>
<keys foo="" baz=""/>
<value for="foo">bar</value>
<value for="baz">quux</value>
</hashtable>
If that’s not horrible XML, I dunno what is.
And it’s harder to validate than your vocabulary and easier to
make mistakes with; both due to co-constraints.
And the Infoset for that is STILL a lot more convoluted than the
data model of a JSON hash table.
These things aren’t matters of opinion; the XML Infoset is
objectively far more complicated than the JSON data model and
does not admit direct mapping of data structures without a
vocabulary and a bunch of local conventions that have various
trade-offs depending on how you choose them.
(Eg. maybe you’d prefer to omit the `keys` element and put the
attributes on the `hashtable` element. Maybe for simple values
you’d prefer to put them right in the attribute, and use the
reference only when the value is compound. Maybe you’d pick the
design you suggested. Someone else would do something else
completely.)
As far as I’m concerned, you can reasonably argue about the value
of the problem domain that JSON maps out, and you can reasonably
argue about whether the things people use JSON for are broader
than that problem domain and more amenable to XML.
But you cannot straight-facedly argue that the Infoset maps
better or even just as well to JSON’s domain as JSON. Not by
a very long stretch.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
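[Editor's note] Aristotle's point about layout proliferation can be made concrete: here are two equally "reasonable" XML layouts for the same table (the second follows the attribute-keyed design he sketches; both readers are invented for this example), against the single obvious JSON form.

```python
import json
import xml.etree.ElementTree as ET

# Two of the many possible XML layouts for the same hash table, versus
# the one obvious JSON form.
layout_a = "<hashtable><entry><key>foo</key><value>bar</value></entry></hashtable>"
layout_b = '<hashtable><value for="foo">bar</value></hashtable>'

def parse_a(text):
    root = ET.fromstring(text)
    return {e.findtext("key"): e.findtext("value")
            for e in root.findall("entry")}

def parse_b(text):
    root = ET.fromstring(text)
    return {e.get("for"): e.text for e in root.findall("value")}

# Each XML layout needs its own bespoke reader; the JSON needs none.
assert parse_a(layout_a) == parse_b(layout_b) == json.loads('{"foo": "bar"}')
```

The local conventions the post describes are exactly the difference between `parse_a` and `parse_b`: same data, incompatible readers.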
A. Pagaltzis wrote:
> The problem with that is the mindboggling amount of indirection
> you have introduced. You have an element containing whitespace
> text nodes and elements, and each subelement in turn contains
> whitespace text nodes and two elements, each of which contains
> a text node.
The problem being what? This is easy to suck into a Hashtable API if
you like. The issue is the APIs, not the markup.
JSON is a clever kludge designed to work within the limitations of
JavaScript 1.0 and browser APIs. However if E4X had been reliably
available across browsers it would never have been necessary, and
probably wouldn't have been invented.
> By comparison:
>
> { 'foo': 'bar' }
>
> That’s a hash containing a key with a string value.
>
> It’s called the XML Infoset.
>
That is *not* the only or the required model for parsing XML syntax. If
you want something else, use something else.
Infoset != XML
> Another issue is that the structure you chose does not enforce
> uniqueness of hash keys at the parser level, which the JSON
> equivalent does.
That's a feature, not a bug. XML lets duplicate keys be expressed if
that's what the data requires. XML does not impose semantics onto the
data. To the extent JSON does, that's a bug, not a feature.
> These things aren’t matters of opinion; the XML Infoset is
> objectively far more complicated than the JSON data model and
> does not admit direct mapping of data structures without a
> vocabulary and a bunch of local conventions that have various
> trade-offs depending on how you choose them
>
> But you cannot straight-facedly argue that the Infoset maps
> better or even just as well to JSON’s domain as JSON. Not by
> a very long stretch.
You brought up the Infoset, not me. Please don't put ugly words like
"Infoset" in my mouth. I'm talking about XML, not the Infoset.
Until you understand the difference between XML and the Infoset, you
won't understand why JSON is inferior. The Infoset is *a* data model,
not *the* data model. You are absolutely free to choose other data
models for XML if it helps you to do so, including ones that ignore
white space, barf on duplicate keys, or whatever you find helpful. And
I'm free to use a completely different data model *for the same
document*. The lack of a data model is precisely what makes XML so
powerful.
XML is about syntax, not semantics. Every semantic JSON adds makes it
weaker and less suitable for network interchange of data and data
modeling.
--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
I hate to add to this thread, which is probably off-topic for
REST-discuss anyhow, but I just ran into a practical example of some of
the tradeoffs between XML and json: the Google Maps GeoCoder API.

http://www.google.com/apis/maps/documentation/index.html#Geocoding_Structured

It returns address info using json, but the json is based on xAL
(eXtensible Address Language)

http://www.oasis-open.org/committees/ciq/ciq.html#6

which of course is XML. Unfortunately, xAL can result in differing json
structures for different addresses, so for example to get the street
address in one location you might say:

Country.AdministrativeArea.SubAdministrativeArea.Locality.Thoroughfare.ThoroughfareName

and in another you might say:

Country.AdministrativeArea.SubAdministrativeArea.Locality.DependentLocality.Thoroughfare.ThoroughfareName

So it seems like json binds client and server together to the extent
that the client needs to know the quirks of the json structure pretty
intimately...

...*Unless* there's something like XPath for json. I looked, but the
only thing I found was http://www.jspath.com/ - which does not appear
to exist, altho it is promised by
http://www.json.com/2007/03/14/xml-is-dead-long-live-xml/
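One way a client can cope with that kind of shape variance is to probe a list of candidate paths. A minimal sketch — the two path strings follow the xAL example above, but the `pick` helper and the trimmed sample responses are hypothetical, not the actual GeoCoder payloads:

```javascript
// Two GeoCoder-style responses whose xAL-derived JSON differs in shape
// (hypothetical sample data, trimmed to the relevant branch).
const resA = { Country: { AdministrativeArea: { SubAdministrativeArea: {
  Locality: { Thoroughfare: { ThoroughfareName: "Main St" } } } } } };
const resB = { Country: { AdministrativeArea: { SubAdministrativeArea: {
  Locality: { DependentLocality: {
    Thoroughfare: { ThoroughfareName: "High St" } } } } } } };

// Walk each dotted path in turn; return the first value found.
function pick(obj, paths) {
  for (const path of paths) {
    let cur = obj;
    for (const key of path.split(".")) {
      cur = cur == null ? undefined : cur[key];
    }
    if (cur !== undefined) return cur;
  }
  return undefined;
}

const STREET_PATHS = [
  "Country.AdministrativeArea.SubAdministrativeArea.Locality.Thoroughfare.ThoroughfareName",
  "Country.AdministrativeArea.SubAdministrativeArea.Locality.DependentLocality.Thoroughfare.ThoroughfareName",
];

console.log(pick(resA, STREET_PATHS)); // "Main St"
console.log(pick(resB, STREET_PATHS)); // "High St"
```

Of course this just moves the quirk-knowledge into a path table; the coupling the post complains about is still there, only centralised.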
Elliotte Harold wrote:
> A. Pagaltzis wrote:
>
>> The problem with that is the mindboggling amount of indirection
>> you have introduced. You have an element containing whitespace
>> text nodes and elements, and each subelement in turn contains
>> whitespace text nodes and two elements, each of which contains
>> a text node.
>
> The problem being what? This is easy to suck into a Hashtable API if
> you like. The issue is the APIs, not the markup.
>
> JSON is a clever kludge designed to work within the limitations of
> JavaScript 1.0 and browser APIs. However if E4X had been reliably
> available across browsers it would never have been necessary, and
> probably wouldn't have been invented.

JS 1.2 actually, not that it matters a whit.

Sorry guys, but this back and forth between the two of you reads as if
you're both arguing over whether a Toyota Corolla or a DAF 95XF is the
better vehicle.

Both JSON and XML serve different purposes, the former being a simple
way to serialise common datastructures, and the latter being a
general-purpose markup language. Sure, their domains overlap, but
they're optimised for different things. Sure, you can use one to do
things the other does, but it's neither natural nor comfortable.
Similarly, while you _could_ use a 95XF as a family car, you'd look
like an even bigger ass than a Humvee driver, and while you could
_try_ pulling a lorry trailer with a Corolla, you wouldn't get very
far.

Seriously, why not argue about something more useful, bikeshed
colours, for instance.

K.

--
Blacknight Internet Solutions Ltd. <http://blacknight.ie/>
Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen,
Carlow, Ireland
Company No.: 370845
On 7/12/07, Elliotte Harold <elharo@...> wrote:
>
> Robert Sayre wrote:
>
> > Can you be more specific? It looks like you're referring to the XSS
> > attacks on JSON that resulted in information disclosure... of
> > course, these were services that were "secured" by cookies.
>
> XSS is only one problem. There's a much more fundamental one with
> evaling JSON as JavaScript. Shipping executable code around and then
> evaluating it without security checks is dangerous. Again, the Google
> search I recommended will lead you to plenty of details on these
> attacks if you're interested.

Oh, you mean calling eval? Yes, executing arbitrary code is bad for
security. Most libraries check the message against a regex. A
dedicated parseJSON function would be better, but the regex seems to
work. Have a look at the one in json.js, and please report any new
flaws you find. :)

> > The possibility of executing JSON with a script tag does make some
> > attacks easier, but it doesn't make anything new possible. There
> > are a variety of technologies available that will preserve
> > confidentiality and prevent replay attacks. The payload format
> > doesn't matter that much.
>
> However the parser matters a great deal. Of course you can use a real
> parser on JSON, but many programs don't. That JSON can be and is
> treated as executable code is a major flaw in its design, one XML
> doesn't share.

XML is treated as executable code all the time.

> JSON, by design, encourages poor security and a variety of attacks.

I don't think you've backed this up.

--
Robert Sayre

"I would have written a shorter letter, but I did not have the time."
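The gap between evaluating a JSON message and parsing it can be shown in a few lines. This is a minimal illustration, not the json.js regex check itself; modern runtimes ship the dedicated `JSON.parse` the post wishes for:

```javascript
const trusted = '{"user": "alice", "admin": false}';
const hostile = '(function(){ globalThis.pwned = true; return {}; })()';

// Evaluating a message as JavaScript runs whatever code it contains.
eval("(" + hostile + ")");
console.log(globalThis.pwned); // true -- the "data" executed code

// A dedicated parser accepts only JSON syntax, so the same payload is
// rejected instead of executed.
console.log(JSON.parse(trusted).user); // "alice"
let rejected = false;
try {
  JSON.parse(hostile);
} catch (e) {
  rejected = true; // SyntaxError: not valid JSON
}
console.log(rejected); // true
```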
On 7/12/07, Elliotte Harold <elharo@...> wrote:
>
> XML is about syntax, not semantics. Every semantic JSON adds makes it
> weaker and less suitable for network interchange of data and data
> modeling.

For me, this paragraph is information-free.

--
Robert Sayre

"I would have written a shorter letter, but I did not have the time."
On 7/12/07, Keith Gaughan <keith@...> wrote:
>
> Seriously, why not argue about something more useful, bikeshed
> colours, for instance.

See, I was thinking "vi vs. emacs," myself. Ever the nonconformist, I
use pico. And nano, when I really need advanced features.

*And* I drive a Toyota that probably could pull a lorry trailer, at
least back in the day. It's twenty years old now... a year newer than
this, and without the girly-man automatic tranny:

http://www.youtube.com/watch?v=0uhSyaUYT3s

There, I think we're sufficiently off the rails now.
Robert Sayre wrote:
> > However the parser matters a great deal. Of course you can use a
> > real parser on JSON, but many programs don't. That JSON can be and
> > is treated as executable code is a major flaw in its design, one
> > XML doesn't share.
>
> XML is treated as executable code all the time.

Certainly XSLT is. And arguably Ant buildfiles, although I haven't
seen many of those used as components in webtwenny mashups.

--
Chris Burdess
Ok. So this discussion is revealing a lot of confusion between syntax
and semantics!
Please take a good look at the graph here, to get a simple picture of
syntax and semantics:
http://blogs.sun.com/bblfish/resource/RDF-syntax-semantics.png
How you can combine string tokens together to form sentences, or
documents is syntax. (that's the upper row of the graph) How those
tokens relate to something else (the lower row of the graph), which
one can think of as things in the world if it helps, is semantics.
(those are the downward arrows, illustrating the reference
relationship) Without semantics you can not speak of the truth of a
sentence. Without semantics you can not make sense of the idea of
refactoring your data [1], because refactoring is preserving the
meaning of what is said, whilst changing the way it is said.
Both xml and json are probably equally complex if you look at them
syntactically. json also can have a huge amount of white space
everywhere, between brackets, lines etc... just like xml. It is just
that json comes with a predefined semantics, which is why it seems
simpler, because you immediately know how to interpret it. Where the
json syntax is simpler, is probably where it is less flexible than
xml...
XML comes with no default semantics. Except of course that everyone
imagines they see one, because it is so difficult for humans not to
interpret text they see. You can think of the structure of the xml
document, but that is not really the semantics of it. That's just an
objectification of the syntax. DOM (and I think the infoset) are
just objectifications of the syntax. They allow one to walk the
objectified syntax of the document.
Now as I mentioned earlier, the most powerful semantics that has been
built on xml for data exchange is RDF. It has the advantage over json
that it has URI namespaces, and complements the syntax of xml. It has
full model theoretical backing on top of it, has been built with the
open world assumption in mind, which is what we have to deal with,
when working on the web, and is not so difficult to learn if you use
notations such as N3. Currently the difficulties with these tools are
those of being on the leading/bleeding edge, not problems of their
essential nature.
I think Tim Berners Lee's team have developed some JavaScript
libraries to parse rdf/xml by the way [2]. So this should help
simplify things somewhat. I am sure with a little effort one could
make things even better.
Henry
[1] http://blogs.sun.com/bblfish/entry/refactoring_xml
[2] http://dig.csail.mit.edu/2005/ajar/ajaw/Developer.html
On 12 Jul 2007, at 12:42, A. Pagaltzis wrote:
> * Elliotte Harold <elharo@...> [2007-07-12 11:10]:
> > Hashtables and arrays are both easy:
> >
> > <hashtable>
> > <entry>
> > <key>foo</key>
> > <value>bar</value>
> > </entry>
> > </hashtable>
>
> The problem with that is the mindboggling amount of indirection
> you have introduced. You have an element containing whitespace
> text nodes and elements, and each subelement in turn contains
> whitespace text nodes and two elements, each of which contains
> a text node.
>
> By comparison:
>
> { 'foo': 'bar' }
>
> That’s a hash containing a key with a string value.
>
> It’s called the XML Infoset.
>
The project I'm working on deals heavily with nested resources... five
levels, at one point (six, I suppose, if you count the site itself).
As I've mentioned, I'm a crazy person, and I expect to be able to use
the "service" as a "site" as well... and the real "content" starts
about four levels down. Pick a resource off the top-level page, narrow
it on the next, narrow it on the next, and finally you start to see
something.

This is less than ideal, obviously - clients of the human variety are
liable to lose interest well before then, among other things.

So my question is, is there anything unRESTful about having a sort of
meta-resource? That is, say you offer a link to what is structured
mostly like a level-3 resource, but it actually contains *all* the
level-4 resources, and no specific level-3 (or perhaps all the
level-4's that belong to all the level-3's that belong to a specified
level-2).

Except in cases where there's too much content (and thus requires
pagination), there aren't going to be any query strings or anything
else search-engine-unfriendly.

Is there anything I'm missing?
Karen wrote:
> So my question is, is there anything unRESTful about having a sort of
> meta-resource? That is, say you offer a link to what is structured
> mostly like a level-3 resource, but it actually contains *all* the
> level-4 resources, and no specific level-3 (or perhaps all the
> level-4's that belong to all the level-3's that belong to a specified
> level-2).

In REST resources don't contain other resources, they link to them.
Now, that linkage could well reflect the sort of containership you're
talking about in your first description, but it could just as well
reflect the second description.
Jon Hanna wrote:
> In REST resources don't contain other resources, they link to them.

Atom feeds can (and typically do) contain a representation of nested
resources; ie, the entries. Those representations are good enough to
keep me from having to actually chase the link, in most cases; ie, I
rarely go to the /. site anymore, I just read the feed in my feed
reader.

It's an interesting problem; sometimes you DO want to contain the
other resource. I think the primary reason for inclusion is cutting
down on network round-trips. Which is a pretty good reason.

--
Patrick Mueller
http://muellerware.org
On 7/12/07, Jon Hanna <jon@...> wrote:
>
> In REST resources don't contain other resources, they link to them.

Hmm. Can they not do both? (Semantically, anyway.)

Say your hierarchy is author/book/chapter/page, where the page
contains text that is what your reader is going to think of as the
"real" content. What's the best way to provide a reader with a method
of getting the full text of a given chapter, or book?

The way I'm thinking a human would navigate it is something like this:

Human: http://library/.
Server: (list o' authors)
Human: http://library/author/samandleo
Server: (list o' books)
Human: http://library/book/rws
Server: (list o' chapters)
Human: yawn http://library/chapter/rws/001
Server: (list o' pages)
Human: http://library/pages/rws/001/0001
Server: (text)
Human: ...has wandered off to check email by now.

What if the list o' books also includes links to
http://library/pages/rws/001/all or some such, where you get a
collection of multiple pages (and links to those individual pages as
individual resources)? Is that a weird thing to do?

And if so, would it be better to do something like
http://library/book/rws?includealltext instead? (With a nofollow,
and/or not included in the machine-only/non-HTML renditions, since
it's all duplicated material search engines and other machines can
find via other, canonical links)

Or should it be a whole 'nother type of resource?
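The "collection with inlined representations" idea can be sketched as data, loosely modelled on how an Atom feed inlines entries alongside their links. Everything here — the URIs, the `chapterWithText` helper, and the field names — is hypothetical:

```javascript
// Each page remains its own addressable resource, but the composite
// representation inlines the text so a client can read the chapter in
// one round-trip or still follow the canonical per-page links.
const pages = {
  "http://library/pages/rws/001/0001": "It was a dark and stormy night...",
  "http://library/pages/rws/001/0002": "Suddenly, a shot rang out!",
};

// Build the composite "all pages of chapter 001" representation.
function chapterWithText(pageStore) {
  return {
    self: "http://library/pages/rws/001/all",
    pages: Object.entries(pageStore).map(([href, text]) => ({
      link: href, // canonical resource, individually addressable
      text: text, // inlined representation, saves a round-trip
    })),
  };
}

const rep = chapterWithText(pages);
console.log(rep.pages.length);  // 2
console.log(rep.pages[0].link); // "http://library/pages/rws/001/0001"
```

The design choice mirrors the Atom point made earlier in the thread: linking and containing aren't mutually exclusive, as long as the inlined things also keep their own URIs.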
* Keith Gaughan <keith@...> [2007-07-12 14:40]:
> Sorry guys, but this back and forwards between the two of you
> reads like as if you're both arguing over whether a Toyota
> Corolla or a DAF 95XF is the better vehicle.
>
> Both JSON and XML serve different purposes, the former being a
> simple way to serialise common datastructures, and the latter
> being a general-purpose markup language.

That *is* what I am saying. JSON is good for common data structures,
and XML is good for documents. I am not saying JSON is good at
everything, at all. It’s Elliotte who is saying that XML is suitable
for everything that JSON is suitable for.

> Sure, their domains overlap, but they're optimised for
> different things. Sure, you can use one to do things the other
> does, but it's neither natural nor comfortable. Similarly,
> while you _could_ use an 95XF as a family car, you'd look like
> an even bigger ass than a Humvee driver, and while you could
> _try_ pulling a lorry trailer with a Corolla, you wouldn't get
> very far.

Exactly.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Elliotte Harold wrote:
> JSON is a clever kludge designed to work within the limitations of
> JavaScript 1.0 and browser APIs. However if E4X had been reliably
> available across browsers it would never have been necessary, and
> probably wouldn't have been invented.

Well, in fact, JSON is really just JavaScript literal syntax, co-opted
for wider usage.

I've never been a fan of JSON because you can (unsafely) eval() it in
your JS code. I've been a fan of it because it makes it easy to
express arrays, maps, and other data-y things in a tidy fashion.

--
Patrick Mueller
http://muellerware.org
>
> In general they are not representations of *any* resource (at least
> not one with a URI known by the client). There are just messages.
>

A 404 or a 410 message says that the requested URI maps to the empty
set:

"A resource can map to the empty set, which allows references to be
made to a concept before any realization of that concept exists"

They are "just messages" but those messages contain entity bodies, and
I do not see where the spec says an entity body (no restriction is
made to 2xx responses in the spec) can not have an entity tag.

>
> > Huh? An ETag is both an entity header field and a response header.
>
> Eric, please read the definitions in RFC2616. Not every header
> appearing in an HTTP response is a "Response Header", as defined in
> Section 6.2.
>

I don't recall making that claim -- obviously, "Server" and "Date"
have nothing to do with the response. Section 6.2 lists ETag as a
response header. So how is it not a response header? Are you saying
that since RFC 2616 doesn't specifically state that ETags may be used
on 4xx responses, then doing so goes against spec? Well, I disagree.

>
> No, it just means that you are using HTTP machinery in a way not
> sanctioned by the spec, so basically you're inventing a new protocol.
>

I'm using HTTP machinery in a way nobody has tried before, this is not
the same thing as "inventing a new protocol" nor is it "against" RFC
2616. If it were a "bad thing" to send an ETag with a 410 response
then the spec would say so. If the spec had to specifically account
for all possible request/response combinations allowable, wouldn't it
be literally thousands of pages long? Just because an ETag on a 410
isn't directly "ruled in" hardly makes it "ruled out".

>
> It's just something that is not compliant to RFC2616. You won't
> notice any problems as long as you're just using your own client, no
> intermediaries, and so on.
>

That is exactly my concern, and remains my challenge. Just exactly
what client will have problems with my setup? Just exactly why would
an intermediary get all fouled up due to an ETag on a 410 response?
Granted, a client needs to understand my API to make the second
DELETE, but that hardly restricts usage to a "custom" client -- I can
make an XForms page which uses XHR to implement this, meaning any HTTP
client with an understanding of XForms (and XHTML) and XHR will
interoperate with my system cleanly.

Again I ask, what, specifically, will happen to trip up any
intermediary and what, exactly, causes my system not to interoperate
with a wide variety of clients? If I have truly violated RFC 2616,
then surely you can come up with an actual downside instead of FUD? Be
specific, please, otherwise the only reason you are giving me is
"because I say so" and not anything to do with RFC 2616, really.

-Eric
* Elliotte Harold <elharo@...> [2007-07-12 13:30]:
> A. Pagaltzis wrote:
> > Another issue is that the structure you chose does not
> > enforce uniqueness of hash keys at the parser level, which
> > the JSON equivalent does.
>
> That's a feature, not a bug. XML lets duplicate keys be
> expressed if that's what the data requires. XML does not impose
> semantics onto the data. To the extent JSON does, that's a bug,
> not a feature.
That’s not a feature, it’s a waste of time.
When I reach for JSON I want to serialise a data structure.
I have absolutely zero interest in modelling the data; I have
already done that in terms of the data types of the language I
use, and all I want is to get my data into an octet stream and
back out of it as trivially as possible.
Impedance mismatch with my language’s data model is not a
feature, it’s a liability. If hash maps in my language can only
have unique keys, I want a format that enforces this constraint
at the parser level, so that ill-formed messages are defined out
of existence, freeing me from ever having to deal with them at a
higher level in the application.
XML making me think about “what the data requires”: sorry, I have
no use for that when I want to serialise a data structure.
And in closing,
[
{ "foo": "bar" },
{ "foo": "baz" },
{ "foo": "quux" }
]
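An aside on the "enforced at the parser level" point: common JavaScript JSON parsers do not actually reject duplicate keys within one object; they silently keep the last value, collapsing the ill-formed case rather than flagging it. A quick check (using the `JSON.parse` available in modern runtimes):

```javascript
// Duplicate keys inside a single object: the parser does not reject
// them; the last occurrence wins.
const dup = JSON.parse('{ "foo": "bar", "foo": "baz" }');
console.log(dup.foo); // "baz"

// Across array elements, as in the example above, repetition is of
// course perfectly legal data.
const list = JSON.parse('[{ "foo": "bar" }, { "foo": "baz" }]');
console.log(list.length); // 2
```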
> The lack of a data model is precisely what makes XML so
> powerful.
Exactly. XML gives me lots of freedom, so I end up wasting time
inventing a vocabulary for “arrays, hashmaps and atomic values”,
even though this particular data model is extremely common. JSON
eschews all flexibility in order to provide me with a turnkey
solution for this singular need.
Yet another demonstration of the principle of least power and the
power of constraints.
The broad domain of XML makes it more useful than JSON in the
broad domain. The narrow domain of JSON makes it more useful than
XML in the narrow domain.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Eric J. Bowman wrote:
> > In general they are not representations of *any* resource (at
> > least not one with a URI known by the client). There are just
> > messages.
>
> A 404 or a 410 message says that the requested URI maps to the empty
> set:
>
> "A resource can map to the empty set, which allows references to be
> made to a concept before any realization of that concept exists"

Yes.

> They are "just messages" but those messages contain entity bodies,
> and I do not see where the spec says an entity body (no restriction
> is made to 2xx responses in the spec) can not have an entity tag.

Yes, but the definition of ETag as "Response Header", as defined in
<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.6.2> makes
that restriction.

When you return a 201 status with an ETag, is the ETag for the entity
that just was stored, or for the message telling the client that a
resource was created? See.

> > > Huh? An ETag is both an entity header field and a response
> > > header.
> >
> > Eric, please read the definitions in RFC2616. Not every header
> > appearing in an HTTP response is a "Response Header", as defined in
> > Section 6.2.
>
> I don't recall making that claim -- obviously, "Server" and "Date"
> have nothing to do with the response. Section 6.2 lists ETag as a
> response header. So how is it not a response header? Are you saying
> that since

It is a response header, but not an entity header. Sorry for the
confusion.

> RFC 2616 doesn't specifically state that ETags may be used on 4xx
> responses, then doing so goes against spec? Well, I disagree.

They don't make sense upon 404/410 because of their definition.

> > No, it just means that you are using HTTP machinery in a way not
> > sanctioned by the spec, so basically you're inventing a new
> > protocol.
>
> I'm using HTTP machinery in a way nobody has tried before, this is
> not the same thing as "inventing a new protocol" nor is it "against"
> RFC 2616. If it were a "bad thing" to send an ETag with a 410
> response then the spec would say so. If the spec had to specifically
> account

No, that's not how specs are written. But maybe now we'll have to put
it into RFC2616bis :-)

> for all possible request/response combinations allowable, wouldn't
> it be literally thousands of pages long? Just because an ETag on a
> 410 isn't directly "ruled in" hardly makes it "ruled out".

Returning it with a 404/410 does not make sense, and that's why the
spec doesn't *need* to say it.

> > It's just something that is not compliant to RFC2616. You won't
> > notice any problems as long as you're just using your own client,
> > no intermediaries, and so on.
>
> That is exactly my concern, and remains my challenge. Just exactly
> what client will have problems with my setup? Just exactly why would
> an intermediary get all fouled up due to an ETag on a 410 response?

It wouldn't. But a generic client never ever will send a DELETE
request to something it already successfully deleted.

> Granted, a client needs to understand my API to make the second
> DELETE, but that hardly restricts usage to a "custom" client -- I can
> make an XForms page which uses XHR to implement this, meaning any
> HTTP client with an understanding of XForms (and XHTML) and XHR will
> interoperate with my system cleanly.

So what if there is a proxy that doesn't even forward the second
DELETE because it already knows about the previous DELETE? Or if the
XmlHttpRequest object follows the spec and assumes that if a GET/HEAD
on a resource once returned a 410, it doesn't make sense to access
that URI again?

> ...

Best regards, Julian
>
> > There's some disagreement here, which stems from the wording of
> > the RFC. But HTTP is not REST,
>
> HTTP is a protocol for REST applications.
>

Following RFC 2616 doesn't begin to guarantee the resulting
implementation won't break the constraints of REST. Ergo, HTTP is not
REST, nor is REST dependent on HTTP as a protocol (no matter what
Microsoft says).

>
> > if we are discussing the semantics of PUT in REST terms
> > (generic interface) then store means replace in RFC 2616 just
> > like STOR means replace in RFC 765.
>
> A generic interface can be anything. A “uniform interface” has no
> particular properties other than being uniform. HTTP provides the
> means to implement *a* particular uniform interface. The
> specifics of the uniform interface defined by HTTP are described
> in RFC 2616 and nowhere else.
>

I think "uniform interface" and "generic interface" mean exactly the
same thing, as Roy seems to use them interchangeably throughout the
thesis. A generic, or uniform, interface follows the principle of
generality. Where does HTTP say anything about generic or uniform
interfaces? How can connector semantics be "generic" across only one
protocol?

>
> > "REST does not restrict communication to a particular protocol,
> > but it does constrain the interface between components, and
> > hence the scope of interaction and implementation assumptions
> > that might otherwise be made between components. For example,
> > the Web's primary transfer protocol is HTTP, but the
> > architecture also includes seamless access to resources that
> > originate on pre-existing network servers, including FTP,
> > Gopher, and WAIS. Interaction with those services is restricted
> > to the semantics of a REST connector."
>
> This is exactly what I'm saying.
>

You're saying that giving PUT semantics other than "replace" meets the
definition of a uniform interface. I'm saying no, if you give PUT any
other semantics besides "replace" then the scope of your interactions
has broken the constraints of REST because other components assume PUT
to have "replace" semantics -- the essence of "self-descriptive
messages" is that they are not application-specific or even
protocol-specific, but generic. An HTTP PUT request is only
self-descriptive if it uses generic "replace" semantics, otherwise it
is application-specific and depends on a library API for
interoperability between components -- at which point it is no longer
self-descriptive.

>
> > This tells me that a REST connector must understand that the
> > semantics of GET equal the semantics of RETR, APPE=POST,
> > DELE=DELETE, LIST=OPTIONS and STOR=PUT in order to meet the
> > Uniform Interface constraint.
>
> Err. Yes, that is true if you are implementing a RESTful bridge
> to old protocols like FTP. It is not a universal truth about
> REST. Sorry.
>

It doesn't matter whether you're using FTP or not. If you have truly
implemented "the semantics of a REST connector" then that connector
can seamlessly access an FTP resource using the generic semantics
shared between the two protocols. If it cannot, whether you intend it
to use FTP or not, then your semantics are not that of a REST
connector. So yes, this is a universal truth about REST.

A REST connector has a "socket" with the semantics of "replace", each
protocol (WAIS, Gopher, FTP, HTTP, presumably waka) has a method which
corresponds to that socket. Don't plug HTTP PUT into any socket other
than "replace" and don't plug FTP STOR into any socket other than
"replace". If you plug one method into multiple sockets, you've broken
this constraint. If you plug PUT into a socket with some other
semantics (like "merge" or "move") instead of the "replace" socket
whose generic semantics are (should be) well understood, your
connector becomes application-specific and not a REST connector. I
don't care if that PUT is an HTTP PUT or a Web3S PUT, one method must
be reserved to implement the generic semantics of "replace" in *any*
REST protocol or implementation.

-Eric
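The "replace" semantics being argued over can be sketched with a toy connector. The names here (`put`, `merge`, the store) are entirely hypothetical; the point is that a PUT with generic replace semantics stores the entity wholesale and is idempotent, while a "merge" verb carries application-specific semantics a generic component can't assume:

```javascript
// A toy resource store illustrating generic "replace" semantics.
const store = new Map();

// PUT: replace whatever representation is there. Repeating the same
// PUT leaves the store in the same state (idempotent).
function put(uri, entity) {
  store.set(uri, entity);
}

// A hypothetical "merge" verb, by contrast, folds the new entity into
// the old one -- semantics that only this application understands.
function merge(uri, partial) {
  store.set(uri, { ...(store.get(uri) || {}), ...partial });
}

put("/doc", { title: "Draft", body: "hello" });
put("/doc", { title: "Final" });      // replaces wholesale; body gone
console.log(store.get("/doc").body);  // undefined

merge("/doc", { body: "world" });     // merges; title survives
console.log(store.get("/doc").title); // "Final"
```

A generic intermediary or client can reason about the first function from the method name alone; it can't reason about the second without out-of-band knowledge, which is the coupling the post objects to.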
Bob Haugen wrote:
> ...*Unless* there's something like XPath for json.

Turns out, there is: it's called JavaScript. Sure, it's more verbose
than XPath (but some would argue that's good). Since JSON translates
fairly well into other programming languages' natural 'array' and
'map' structures, if you're decoding your JSON into 'data' in language
X, then language X is your XPath analogue.

Your example of JSON-ified xAL is a good example of when NOT to use
JSON. Folks who use JSON need to clearly describe the JSON data they
are providing (or expecting) from their peer (client or server). It
sounds like doing that for xAL would be a bit nightmarish. And not
worth the effort.

Or, since the xAL usage in the example is just a piece of the
otherwise simple structure returned by the geocode API, as a hybrid
approach, you might return that xAL XML as a string value of the
"AddressDetails" element, instead of JSON-ifying it. See

http://www.franklinmint.fm/blog/archives/000965.html

--
Patrick Mueller
http://muellerware.org
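The "your language is the XPath analogue" point, in a minimal sketch: an XPath-ish query like "all ThoroughfareName values anywhere in the tree" is just a recursive walk over the decoded JSON. The data shape and helper below are made up, not the actual geocode response:

```javascript
// Collect every string value stored under the given key, anywhere in
// the (decoded-from-JSON) object tree -- roughly //ThoroughfareName.
function findAll(node, key, out = []) {
  if (node && typeof node === "object") {
    for (const [k, v] of Object.entries(node)) {
      if (k === key && typeof v === "string") out.push(v);
      findAll(v, key, out);
    }
  }
  return out;
}

// Hypothetical decoded geocode response fragment.
const data = {
  Locality: {
    Thoroughfare: { ThoroughfareName: "Main St" },
    DependentLocality: {
      Thoroughfare: { ThoroughfareName: "High St" },
    },
  },
};

console.log(findAll(data, "ThoroughfareName")); // ["Main St", "High St"]
```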
* Eric J. Bowman <eric@...> [2007-07-13 00:15]:
> > > There's some disagreement here, which stems from the wording of
> > > the RFC. But HTTP is not REST,
> >
> > HTTP is a protocol for REST applications.
>
> Following RFC 2616 doesn't begin to guarantee the resulting
> implementation won't break the constraints of REST.

That’s not the protocol’s job nor is it possible for the protocol to
ensure that.

> Ergo, HTTP is not REST,

Tautological. HTTP is a _protocol_ for REST applications.

> nor is REST dependent on HTTP as a protocol (no matter what
> Microsoft says).

I said HTTP is *a* protocol for REST applications. What were you
trying to argue, again? You have not said a single thing that
contradicts anything I said.

> A REST connector has a "socket" with the semantics of
> "replace", each protocol (WAIS, Gopher, FTP, HTTP, presumably
> waka) has a method which corresponds to that socket. Don't plug
> HTTP PUT into any socket other than "replace" and don't plug
> FTP STOR into any socket other than "replace". If you plug one
> method into multiple sockets, you've broken this constraint.

You are talking about a universal interface, not just a uniform
interface. Good luck with that.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> >Yes, but the definition of ETag as "Response Header", as defined in ><http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.6.2> makes >that restriction. > >When you return a 201 status with an ETag, is the ETag for the entity >that just was stored, or for the message telling the client that a >resource was created? > If I PUT a new resource on the server, the 201 response "MAY contain an ETag... for the requested variant just created..." So the returned ETag does indeed match the requested variant. On my server, if I PUT an entity to /foo I would get back a 201 Created response whose message body tells me the following resources have been created: /foo /foo;view=xhtml /foo;view=html /foo;view=xslt /foo;view=text The most-specific match is in the Content-Location header, and the ETag would pertain to the variant in Content-Location if I choose to return an ETag. I see your point, that the ETag in such a case does not pertain to the message body of the response itself, but the ETag does pertain to the "requested variant" and is the ETag I will see if I GET that variant. But it doesn't change my mind, consider: "The ETag response-header field provides the current value of the entity tag for the requested variant." So I still do not see where any restriction exists, which says that if I request a resource that's nonexistent the 410 response can't have an ETag, or how that ETag does not pertain to the requested variant -- or how doing so causes any problems for any clients or intermediaries. The purpose of the spec is to allow interoperability. If my implementation does not cause interoperability problems then I don't see where the spec has been violated, or needs to be changed to disallow what I have implemented. > >> RFC 2616 doesn't specifically state that ETags may be used on 4xx >> responses, then doing so goes against spec? Well, I disagree. > >They don't make sense upon 404/410 because of their definition. 
> They don't make sense *to you* because of their definition, but it makes sense *to me* as I see nothing in those definitions which precludes doing what I am doing. > >> I'm using HTTP machinery in a way nobody has tried before, this is not >> the same thing as "inventing a new protocol" nor is it "against" RFC >> 2616. If it were a "bad thing" to send an ETag with a 410 response >> then the spec would say so. If the spec had to specifically account > >No, that's not how specs are written. But maybe now we'll have to put it >into RFC2616bis :-) > OK, but only if you can justify why this restriction is needed despite the fact that no interoperability problems result if it is not met. > >Returning it with a 404/410 does not make sense, and that's why the spec >doesn't *need* to say it. > Or, from my point of view, the spec doesn't preclude this because it doesn't make any sense to impose such a restriction, which is why the spec doesn't *need* to say it. > >It wouldn't. But a generic client never ever will send a DELETE request >to something it already successfully deleted. > Well, curl is a pretty generic client, and it has no problem with sending a DELETE request anywhere. If this restriction existed, wouldn't the client first have to do a HEAD request to make sure the resource wasn't already deleted, before sending a DELETE? Funny, I don't see that in the spec... Try it for yourself. Set up a 410 response, then DELETE it using curl or any other HTTP client capable of a DELETE, then try telling me your server is not seeing that request? The only thing a client is allowed to do after a successful DELETE is mark a cache entry stale, and remove a bookmark, but not any assumptions beyond that -- especially refusing to honor the user's request for any reason beyond auth failure. 
If you have a client that refuses to send a DELETE request to a resource it knows responds 410 Gone, then you are not using a generic client, as that client is making assumptions beyond what the spec allows as to the permanence of a 410 response. If a 410 can be "unmarked" and changed to a 404, and there's nothing wrong with changing a 404 to a 200 by defining a resource, then it is completely wrong behavior for a client to assume permanence of a 410 response which is not written into the spec anywhere. > >So what if there is a proxy that doesn't even forward the second DELETE >because it already knows about the previous DELETE? Or if the >XmlHttpRequest object follows the spec and assumes that if a GET/HEAD on >a resource once returned a 410, it doesn't make sense to access that URI >again? > Julian, that behavior is simply not in the spec. There is nothing about RFC 2616 which states that a request resulting in a 410 response can't be repeated, not even a SHOULD NOT, any more than it says that about a 404 response. If the spec allows me to change a 410 into a 404 then why would the spec also forbid clients from ever attempting to access a resource once a 410 has been received? That would be contradictory, thankfully that's not what RFC 2616 says. As to intermediaries, I only see one action allowed in response to a successful DELETE request passing through that intermediary -- marking any preexisting cache entry for that resource as stale. If some intermediary misbehaves because it is disobeying the spec, there's really nothing I can do about it besides hope it isn't on the path between me and my server. It would definitely be an error for a proxy to refuse to forward any DELETE request. Please cite your reference for this. -Eric
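Eric's scenario of repeated DELETEs can be made concrete with a toy origin-server model. Everything here is hypothetical (the class, the URIs, and the policy of answering 410 for already-deleted resources are invented for illustration); the point is only that the server does see, and must answer, every repeated request a client chooses to send:

```python
class ToyResourceStore:
    """Toy origin-server state, to illustrate repeated DELETEs.

    Nothing in RFC 2616 forbids a client from sending DELETE again
    after a success; here the first DELETE succeeds and later ones
    report 410 Gone, one defensible server policy."""

    def __init__(self, resources):
        self.live = set(resources)
        self.gone = set()

    def delete(self, uri):
        if uri in self.live:
            self.live.remove(uri)
            self.gone.add(uri)
            return 204  # deleted; clients may mark cache entries stale
        if uri in self.gone:
            return 410  # permanently gone, no forwarding address
        return 404      # never existed, as far as this server knows
```

A second `delete("/foo")` call reaches the store and gets 410; whether an intermediary short-circuits that round trip is exactly the question the thread goes on to argue.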
> >So what if there is a proxy that doesn't even forward the second DELETE >because it already knows about the previous DELETE? Or if the >XmlHttpRequest object follows the spec and assumes that if a GET/HEAD on >a resource once returned a 410, it doesn't make sense to access that URI >again? > I don't believe every HTTP client in existence is broken. Point any web browser, on any platform, at a 410 response. Now, hit "reload". See? The reload button is still available, and the client will indeed repeat the request. Every time. 4xx responses are not cacheable because they do not indicate permanence. -Eric
On 7/12/07, Eric J. Bowman <eric@...> wrote: > > I don't believe every HTTP client in existence is broken. Point any web > browser, on any platform, at a 410 response. Now, hit "reload". See? > The reload button is still available, and the client will indeed repeat > the request. Every time. 4xx responses are not cacheable because they > do not indicate permanence. Incorrect. RFC 2616, section 13.4: "A response received with a status code of 200, 203, 206, 300, 301 or 410 MAY be stored by a cache and used in reply to a subsequent request, subject to the expiration mechanism, unless a cache-control directive prohibits caching." <http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.4> -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
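The storability rule Robert quotes, together with the no-cache wrinkle raised later in the thread, can be sketched roughly. This is a simplification that ignores Expires/max-age and field-scoped no-cache; the function name and the three-way return values are invented for the example:

```python
# Status codes that RFC 2616 sec 13.4 names as storable by default.
CACHEABLE_BY_DEFAULT = {200, 203, 206, 300, 301, 410}

def cache_disposition(status, cache_control=""):
    """Rough disposition of a response for a cache: 'no-store' means
    don't keep it at all, 'revalidate' means it may be kept but must
    be checked with the origin before reuse (the usual reading of
    no-cache), 'store' means it may be reused subject to expiration."""
    directives = {d.strip().lower()
                  for d in cache_control.split(",") if d.strip()}
    if "no-store" in directives:
        return "no-store"
    if status not in CACHEABLE_BY_DEFAULT:
        return "no-store"  # not storable by default (404 included)
    if "no-cache" in directives:
        return "revalidate"
    return "store"
```

So a bare 410 may be served from cache, a 404 may not, and a 410 marked no-cache sits in between: storable, but only reusable after revalidation.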
At Thu, 12 Jul 2007 23:24:05 +0000, "Eric J. Bowman" <eric@...> wrote: > > [Julian? wrote] > > > >So what if there is a proxy that doesn't even forward the second DELETE > >because it already knows about the previous DELETE? Or if the > >XmlHttpRequest object follows the spec and assumes that if a GET/HEAD on > >a resource once returned a 410, it doesn't make sense to access that URI > >again? > > > > I don't believe every HTTP client in existence is broken. Point any web > browser, on any platform, at a 410 response. Now, hit "reload". See? > The reload button is still available, and the client will indeed repeat > the request. Every time. 4xx responses are not cacheable because they > do not indicate permanence. The fact that *at the moment* web browsers do not take into account the difference between a 404 & 410 and reload a 410 is irrelevant. Would you consider it an error if a browser saved a little bandwidth and didn’t really reload a 410? I wouldn’t. A browser trusts the server. If a server sends a 410, it means it. As an example, browsers issue conditional GETs when reloading: in other words, they trust the server to send them the new representation if it’s actually new. Do you consider this an error? Additionally, if a HEAD returns 410, why should a software client be expected to believe that a DELETE might have any meaningful effect on that resource? best, Erik Hetzner ;; Erik Hetzner, California Digital Library ;; gnupg key id: 1024D/01DB07E3
On 7/12/07, Karen <karen.cravens@...> wrote: > On 7/12/07, Jon Hanna <jon@...> wrote: > > In REST resources don't contain other resources, they link to them. > > Hmm. Can they not do both? (Semantically, anyway.) Sure. > > Say your hierarchy is author/book/chapter/page, where the page > contains text that is what your reader is going to think of as the > "real" content. That's not a good example of a hierarchy IMO. Resource containment means, amongst other things, that the lifecycle of the resources is such that if the containing resource goes away, so does the contained resource. So if you got rid of the author, do her books vanish? Nope, so that's not containment. But if you got rid of a book, then its chapters and pages go with it. That is containment. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
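Mark's lifecycle test for containment is easy to model: deleting a container cascades to what it contains, while a mere link is unaffected. A minimal sketch, with invented data and names:

```python
# Toy containment tree: a book contains its chapters, a chapter its
# pages.  An author would merely *link* to books, so it has no entry
# here and deleting her would cascade to nothing.
contains = {
    "/book1": ["/book1/ch1", "/book1/ch2"],
    "/book1/ch1": ["/book1/ch1/p1"],
}

def cascade_delete(uri, contains):
    """Return every URI removed when `uri` is deleted, walking the
    containment tree depth-first."""
    removed = [uri]
    for child in contains.get(uri, []):
        removed.extend(cascade_delete(child, contains))
    return removed
```

Deleting `/book1` takes both chapters and the page with it; deleting a leaf removes only itself.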
A. Pagaltzis wrote: > Impedance mismatch with my language’s data model is not a > feature, it’s a liability. Serialized formats that are tied to one language are a liability, not a feature. > If hash maps in my language can only > have unique keys, I want a format that enforces this constraint > at the parser level, so that ill-formed messages are defined out > of existence, freeing me from ever having to deal with them at a > higher level in the application. Serialized formats that restrict what you can say are a liability. XML usually lets you express what the data actually is, without too many contortions (at least until overlap rears its head). Hashtables don't. :-( -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
> >Incorrect. > >RFC 2616, section 13.4: >"A response received with a status code of 200, 203, 206, 300, 301 or >410 MAY be stored by a cache and used in reply to a subsequent >request, subject to the expiration mechanism, unless a cache-control >directive prohibits caching." > You're right, I overlooked that. But I don't see it as a deal-breaker. A client receiving a 410 Gone in response to a DELETE request, with an explicit no-cache directive, must send any subsequent DELETE request. In the presence of a no-cache directive, an intermediary MUST forward that request. The downside to this is, seeing as how a 410 Gone response is cacheable, and the most-likely request for such a resource will be GET, the system would not be as scalable as it might otherwise be. In fact, being able to cache a 410 response is a good argument in favor of preferring 410 over 404 for resources that have been deleted. Which brings me back to requiring If-None-Match on the first DELETE using the existing cache headers, and allowing 410 responses to be cached. -Eric
On 7/12/07, Mark Baker <distobj@...> wrote: > That's not a good example of a hierarchy IMO. Resource containment > means, amoungst other things, that the lifecycle of the resources are > such that if the containing resource goes away, so does the contained > resource. So if you got rid of the author, do her books vanish? > Nope, so that's not containment. But if you got rid of a book, then > its chapters and pages go with it. That is containment. Well, in the example the "author" really just means "a collection of one or more books by the same person." And we don't have multi-author works or complications like that. Work with me, here. Er, actually, zero or more books, now that I think about it. But yes, if you got rid of an "author," that means getting rid of all the books (and chapters, and pages) by that author. (Whether a delete cascade like that is permissible, or whether you serve a 4xx for a non-empty author depends on what the chances of accidental deletion are, and what the consequences of doing so are, I imagine.) Patrick's Atom feed bit is actually a pretty good example, and I would have used it except I was afraid we'd get off in the weeds since I don't know the spec that well. Other than that, it's a useful question, if I generalize it to "a RESTful blog site" and don't specify what format the feeds are in. Well, I suppose I could say "RSS" without specifying a flavor, and then no one could pin me down on syntax... So okay, suppose the hierarchy is category/blog/label/thread/entry, where one "entry" in a thread is the blog post and the rest are comments. And where "label" is one of those subcategory sorts of things, not a tag that entries can have none or a dozen of. Deleting a blog requires deleting its labels, deleting a label requires deleting its threads, deleting a thread requires deleting its entries. (What you probably DON'T want to get into is accepting PUTs. 
Category-for-its-own-sake is a different resource, because having to know and PUT every blog, label, thread, and entry within one just to change a spelling mistake in its description is a bit over the top...) So given that example, suppose the reader wants to see the most recent 25 entries in a *category*... what ought that resource look like, roughly? Is it a special type of entry resource like http://blogsite/entry?catname=categoryiwant or a special form of the category resource like http://blogsite/category/catname?entries=full or something else entirely? Or maybe it's a special type of thread since thread already is a multi-entry type of resource? That starts to sound like a "something else entirely," since it may share syntax with "thread" but it certainly isn't semantically a thread.
Perhaps next we could take up Ford vs. Chevy? Coke vs. Pepsi? PC vs. Mac?
> >The fact that *at the moment* web browsers do not take into account >the difference between a 404 & 410 and reload a 410 is irrelevant. >Would you consider it an error if a browser saved a little bandwidth >and didn’t really reload a 410? I wouldn’t. > Well, I do stand corrected, 410 responses MAY be cached. However, if I have marked the 410 response explicitly with a no-cache directive I would indeed consider that an error, just as I would consider it to be an error if a client cached a 404 response. > >Additionally, if a HEAD returns 410, why should a software client be >expected to believe that a DELETE might have any meaningful effect on >that resource? > Good point, unless the 410 is marked no-cache. -Eric
I said "HTTP is not REST" and what I mean by that is RFC 2616 doesn't define the REST architectural style. You then replied as if to contradict that statement: > >HTTP is a protocol for REST applications. > Which sounds to me like you are arguing that RFC 2616 defines the term, "REST application" when in reality it makes no mention of REST, or uniform interfaces, or anything else. This implied to me that your meaning was that any implementation of HTTP that doesn't violate RFC 2616 is automatically a REST application. If that was not your meaning, then I apologize for misinterpreting your words. > >What were you trying to argue, again? You have not said a single >thing that contradicts anything I said. > I might ask you the same question, if your response to my assertion that HTTP doesn't define REST singles out that statement for rebuttal -- when you could just as easily ask me to clarify my meaning. If anyone is still confused, when I say HTTP is not REST I mean that RFC 2616 doesn't begin to describe an architectural style because it's only a protocol. For example, XML-RPC doesn't violate RFC 2616 but it does break the constraints of the REST architectural style. Ergo, HTTP is not REST, as HTTP allows many things to be done which go directly against REST, and it is not possible to infer anything about REST from reading RFC 2616. > >You are talking about a universal interface, not just a uniform >interface. > I'm sorry, but I do not find the word "universal" in the definition of REST given in Chapter 5. The only reference I see is in 4.1: "The challenge was to build a system that would provide a universally consistent interface to this structured information, available on as many platforms as possible, and incrementally deployable as new people and organizations joined the project." This implies to me that the Uniform Interface constraint meets the design goal of a universal interface. 
All snide remarks aside, what do _you_ see as the difference between "generic interface", "universal interface" and "uniform interface"? I see those terms as interchangeable and define them all to mean the same thing as "REST connector". A REST connector is a network API where the semantics of each interaction uniformly fit a universal interface model so long as the semantics of the methods used are kept generic. We can debate semantics for the rest of the year, I still say to meet this constraint PUT must be given the same meaning in HTTP as STOR has in FTP but I will stand on my two earlier explanations for why this must be so, and not repeat myself here. -Eric
Eric J. Bowman wrote: > ... > So I still do not see where any restriction exists, which says that if I > request a resource that's nonexistent the 410 response can't have an ETag, > or how that ETag does not pertain to the requested variant -- or how > doing so causes any problems for any clients or intermediaries. > ... It just doesn't make sense to accept a DELETE, and subsequently return 404 or 410, and to still claim that there is a representation (with that etag) mapped to that URI. The point of a DELETE is to remove that mapping, and the point of 404/410 is so that the server can signal that there are no representations left mapped to that URI. > The purpose of the spec is to allow interoperability. If my > implementation does not cause interoperability problems then I don't see > where the spec has been violated, or needs to be changed to disallow > what I have implemented. The same can be said about SOAP (I guess), so would you defend SOAP as well? > >> RFC 2616 doesn't specifically state that ETags may be used on 4xx > >> responses, then doing so goes against spec? Well, I disagree. > > > >They don't make sense upon 404/410 because of their definition. > > > > They don't make sense *to you* because of their definition, but it > makes sense *to me* as I see nothing in those definitions which > precludes doing what I am doing. <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.10.4.5>: "10.4.5 404 Not Found The server has not found anything matching the Request-URI...." It seems that you say that: - there's nothing here, - but it has a representation, - but if you ask for it, I'll not send it to you. Sorry, doesn't compute. > >> I'm using HTTP machinery in a way nobody has tried before, this is not > >> the same thing as "inventing a new protocol" nor is it "against" RFC > >> 2616. If it were a "bad thing" to send an ETag with a 410 response > >> then the spec would say so. 
If the spec had to specifically account > > > >No, that's not how specs are written. But maybe now we'll have to put it > >into RFC2616bis :-) > > > > OK, but only if you can justify why this restriction is needed despite > the fact that no interoperability problems result if it is not met. I do not believe there'll be no interop problem. But besides that, the same could be said about a protocol that tunnels everything through POST, right? > >Returning it with a 404/410 does not make sense, and that's why the spec > >doesn't *need* to say it. > > > > Or, from my point of view, the spec doesn't preclude this because it > doesn't make any sense to impose such a restriction, which is why the > spec doesn't *need* to say it. OK, go on ignoring the definitions on 404 and 410. > >It wouldn't. But a generic client never ever will send a DELETE request > >to something it already successfully deleted. > > > > Well, curl is a pretty generic client, and it has no problem with > sending a DELETE request anywhere. If this restriction existed, wouldn't > the client first have to do a HEAD request to make sure the resource > wasn't already deleted, before sending a DELETE? Funny, I don't see > that in the spec... Oh well. That's when you invoke curl *twice*, and the second instantiation has no knowledge about what happened before. Things may look entirely different if you're using a HTTP stack that does have such a kind of memory. > Try it for yourself. Set up a 410 response, then DELETE it using curl > or any other HTTP client capable of a DELETE, then try telling me your > server is not seeing that request? The only thing a client is allowed > to do after a successful DELETE is mark a cache entry stale, and > remove a bookmark, but not any assumptions beyond that -- especially > refusing to honor the user's request for any reason beyond auth failure. 
So an HTTP stack that internally implements a cache and does not forward a GET request to the origin server when it already has the answer is broken? If you really think so, I'd recommend that you review the XHR working draft (<http://dev.w3.org/cvsweb/~checkout~/2006/webapi/XMLHttpRequest/Overview.html>). > If you have a client that refuses to send a DELETE request to a resource > it knows responds 410 Gone, then you are not using a generic client, as > that client is making assumptions beyond what the spec allows as to the > permanence of a 410 response. If a 410 can be "unmarked" and changed to Sorry? <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.10.4.11>: "10.4.11 410 Gone The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent...." > a 404, and there's nothing wrong with changing a 404 to a 200 by defining > a resource, then it is completely wrong behavior for a client to assume > permanence of a 410 response which is not written into the spec anywhere. Unless it is, see above. > >So what if there is a proxy that doesn't even forward the second DELETE > >because it already knows about the previous DELETE? Or if the > >XmlHttpRequest object follows the spec and assumes that if a GET/HEAD on > >a resource once returned a 410, it doesn't make sense to access that URI > >again? > > > > Julian, that behavior is simply not in the spec. There is nothing about > RFC 2616 which states that a request resulting in a 410 response can't > be repeated, not even a SHOULD NOT, any more than it says that about a > 404 response. If the spec allows me to change a 410 into a 404 then There's no point in forbidding it. You may want to repeat it as often as you want. But that you are allowed to do that doesn't mean it makes sense. > why would the spec also forbid clients from ever attempting to access a > resource once a 410 has been received? 
That would be contradictory, > thankfully that's not what RFC 2616 says. It doesn't forbid clients from doing that. It just says that once a server said "410" once, clients can assume they don't need to. > As to intermediaries, I only see one action allowed in response to a > successful DELETE request passing through that intermediary -- marking > any preexisting cache entry for that resource as stale. If some > intermediary misbehaves because it is disobeying the spec, there's > really nothing I can do about it besides hope it isn't on the path > between me and my server. It would definitely be an error for a proxy > to refuse to forward any DELETE request. Please cite your reference > for this. As others have pointed out, the spec clearly says that a 410 response is cacheable unless marked otherwise: <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.13.4.p.4>: "A response received with a status code of 200, 203, 206, 300, 301 or 410 MAY be stored by a cache and used in reply to a subsequent request, subject to the expiration mechanism, unless a cache-control directive prohibits caching." Best regards, Julian
John D. Heintz wrote: > > Elliotte, are you serious that this is a bad thing? Very serious. > You are arguing that adding constraints to something (instead of an > arbitrary extensibility) is a bad thing. > > Isn't this an ironic position to take - on the REST discuss list? > Yes, it's ironic, but not wrong. You're quite perceptive to notice the connection, and bring this discussion back on topic. Schema enforced validity, be the schema REST or something else, is a problem and an antipattern, for much the same underlying reasons. Syntax is interoperable. Semantics are not. > Let me rewrite your quote above. I'm going to substitute "constraint" > for "semantic", and "REST" for "JSON": > "Every constraint REST adds makes it weaker and less suitable for > network interchange of data and data modeling." Almost. The difference is that REST is merely a schema language, not a data language. I would say, every semantic REST imposes on an XML vocabulary makes the vocabulary weaker and less suitable for network interchange of data and data modeling. That doesn't mean REST and other schema languages aren't useful, just that they should not be used to impose semantics on documents. It is the entire enterprise of specifying one and only one possible interpretation of a given document that is flawed. JSON bakes this flaw into itself much deeper than XML does. XML separates validity from well-formedness, and allows for a plethora of schemas and schema languages. JSON does not. In JSON syntax and semantics are inextricably intertwined. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Story Henry wrote: > Other provincialisms of the example > above is to assume that a post code is a number. Clearly they have never > lived in the UK! Or New Jersey. To this day, I see mailing labels with four digit zip codes because somebody stuck a zip code in an int field somewhere. (For those outside the U.S. Northeast, New Jersey zip codes all begin with "0"; e.g. 09748) -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 13 Jul 2007, at 02:46, Elliotte Harold wrote:
> A. Pagaltzis wrote:
>
> > Impedance mismatch with my language's data model is not a
> > feature, it's a liability.
>
> Serialized formats that are tied to one language are a liability,
> not a
> feature.
>
I agree very much. Languages are usually defined as a syntax with a
semantics. What is needed is to disassociate the
syntax from the semantics. If we keep the semantics stable, as we do
by choosing to work with URIs (which are universal names - names
name things), we can change the syntax and allow a natural selection
of syntaxes. JSON with a good mapping could be an interesting syntax.
But in fact JSON is only half way there. Let us look at an example
from the JSON wikipedia page:
{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": 10021
  },
  "phoneNumbers": [
    "212 732-1234",
    "646 123-4567"
  ]
}
(Now reading the spec, it says that an "object is an unordered set of
name-value pairs". That means it cannot be a hash map since a hash
map forces only single keys, and there is no restriction there on
single keys. Seems to me that if you map that into an object in a
simplistic way, you are going to lose data.)
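Henry's data-loss worry is directly observable in a typical parser; Python's standard library behaves like most JSON libraries here:

```python
import json

# Legal per the grammar Henry quotes ("an unordered set of name-value
# pairs" with no uniqueness rule): a repeated name.
doc = '{"firstName": "John", "firstName": "Jane"}'

# The parser silently keeps the last binding, so one of the two
# values is lost on the way into a hash map.
parsed = json.loads(doc)
assert parsed == {"firstName": "Jane"}
```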
Now the problem it seems to me with JSON is that it both has a
beginning of a semantics, it has types for true, false, and numbers,
and at the same time it does not have enough. The spec is completely
at the syntactic level. The semantics it has come from it being so
closely tied to JavaScript, which has a procedural semantics. Numbers
refer to numbers because that's the way JavaScript will interpret them.
As I mention elsewhere, the other problem is the lack of identity of
things. "firstName" is a relation relating to what we can clearly see
to be a Person object to the string "John". What if somewhere else I
find a French site that has "prenom" and "nom" instead. How can I say
that these two words refer to the same relation? I can't really
because there is so much that is underspecified.
In fact the more I look at the example the more I notice the same
provincialism that led to all the problems with xmlrpc [1] such as
that XML-RPC not defining a time zone for dates. When working on the
internet you have to think globally, or else you are still stuck in
client-server mode of thinking. Just as the xmlrpc folk never seemed
to have realized that the world may have more than one time zone, so
it is clear that JSON was not designed with the thought that data
would be traveling in a global space with no context (you know on the
web you can link any two resources together, so you don't know ahead
of time where people are going to come from, and how they are going
to mesh up the data). This should not be surprising because
javascript is a scripting language that is meant to only work within
a page. Other provincialisms of the example above is to assume that a
post code is a number. Clearly they have never lived in the UK! Or to
give phone numbers out without a country prefix! Oh my god! people on
the internet may find this information from another country? Well
should they not know that the US prefix is +1? Yeah! I was in Prague
recently and the people there assumed everywhere that you knew not
just the country prefix, but what the local Prague prefix was meant to
be. You want to book a hotel in Prague from another country? You must
be crazy.
Let me rewrite the above example in the Turtle subset of N3, and also
give the person a name:
@prefix : <http://xmlns.com/foaf/0.1/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix contact: <http://www.w3.org/2000/10/swap/pim/contact#> .

<http://eg.com/joe#p>
    a :Person;
    :firstName "John";
    :family_name "Smith";
    contact:home [
        contact:address [
            contact:city "New York";
            contact:country "New York";
            contact:postalCode "10021";
            contact:street "21 2nd Street";
        ]
    ];
    foaf:phone <tel:+1-212-732-1234>, <tel:+1-646-123-4567> .
Now this may require a little learning curve - but frankly not that
much - to understand. But it has the following advantages:
1. you can know what any of the terms mean by clicking on them
(append the prefix to the name) and do a GET
2. you can make statements of equality between relations and things,
such as
:firstName = frenchfoaf:prenom .
3. you can infer things from the above, such as that
<http://eg.com/joe#p> a :Agent .
4. you can mix vocabularies from different namespaces as above, just
as in Java you can mix classes developed by
different organisations. There does not even seem to be the notion
of a namespace in JSON, so how would you reuse the work of others?
5. you can split the data about something in pieces. So you can put
your information about <http://eg.com/joe#p> at the "http://eg.com/joe" URL, in a restful way, and other people can talk about him at
their URL. I could for example add the following to my foaf file:
<http://bblfish.net/people/henry/card#me> :knows <http://eg.com/joe#p> .
You can't do that in a standard way in JSON because it does not
have a URI as a base type (weird for a language that wants to be a
web language, to miss the core element of the web! yet it has true,
false and numbers!)
Now that does not mean JSON can't be made to work right, as the
SPARQL JSON result set serialisation does [2]. But it does not do the
right thing by default. A bit like languages before Java that did not
have unicode support by default. You could do the right thing if you
knew a lot. But most people just got into bad habits instead.
Henry
[1] "Some of the many limitations of the MetaWeblog API":
http://bblfish.net/blog/page7.html#2005/06/20/22-28-18-208
[2] http://www.w3.org/TR/rdf-sparql-json-res/
>
>
> > If hash maps in my language can only
> > have unique keys, I want a format that enforces this constraint
> > at the parser level, so that ill-formed messages are defined out
> > of existence, freeing me from ever having to deal with them at a
> > higher level in the application.
>
> Serialized formats that restrict what you can say are a liability.
>
> XML usually lets you express what the data actually is, without too
> many
> contortions (at least until overlap rears its head). Hashtables
> don't. :-(
>
Home page: http://bblfish.net/
Sun Blog: http://blogs.sun.com/bblfish/
Foaf name: http://bblfish.net/people/henry/card#me
Karen wrote: > So okay, suppose the hierarchy is category/blog/label/thread/entry, > where one "entry" in a thread is the blog post and the rest are > comments. And where "label" is one of those subcategory sorts of > things, not a tag that entries can have none or a dozen of. Deleting a > blog requires deleting its labels, deleting a label requires deleting > its threads, deleting a thread requires deleting its entries. (What > you probably DON'T want to get into is accepting PUTs. If I understand you right you mean that there could be a uri of http://example.net/catX/blogY/labelZ/threadα/entryβ and one of http://example.net/catX/blogY/labelZ/threadα/ and one of http://example.net/catX/blogY/labelZ/ and so on. This is all well and good - indeed I highly recommend these sorts of hierarchical URIs. However as far as REST is concerned there is no inherent relationship between any of those URIs. It is extremely easy for a representation of entryβ to show its relationship to threadα though as the relative URI is .. and most likely the code that builds the representation of threadα will just always contain a link to .. and depend upon the rest of the system to make sure it's only going to come into effect somewhere where that code makes sense. Maybe I'm not getting what you're saying though.
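Jon's point about relative references can be checked with a standard RFC 3986 resolver; note that the result depends on whether the entry URI carries a trailing slash. ASCII stand-ins ("thread1", "entry2") replace the Greek segment names here:

```python
from urllib.parse import urljoin

# A hierarchical URI for an entry, without a trailing slash:
base = "http://example.net/catX/blogY/labelZ/thread1/entry2"

# "entry2" is the last segment, so ".." climbs past the thread to the
# label level:
assert urljoin(base, "..") == "http://example.net/catX/blogY/labelZ/"

# "." names the enclosing "directory", i.e. the thread:
assert urljoin(base, ".") == "http://example.net/catX/blogY/labelZ/thread1/"
```

So a link of `..` in an entry's representation points at the label, not the thread, unless entry URIs end in a slash; that trailing-slash convention is worth fixing early in a design like Karen's.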
Eric J. Bowman wrote: >> So what if there is a proxy that doesn't even forward the second DELETE >> because it already knows about the previous DELETE? Or if the >> XmlHttpRequest object follows the spec and assumes that if a GET/HEAD on >> a resource once returned a 410, it doesn't make sense to access that URI >> again? >> > > I don't believe every HTTP client in existence is broken. Point any web > browser, on any platform, at a 410 response. Now, hit "reload". See? On a more general point this doesn't prove anything. Firstly, I'm quite willing to believe that every HTTP client in existence is broken, albeit probably only for edge cases. I'm more than willing to believe that every HTTP client in existence is sub-optimal - whether by oversight, flaw or conservative assumptions about the compliance of other HTTP agents (possibly *correct* conservative assumptions). There is never a case where not caching is not allowed, and only a handful where repeating a request in full is not allowed (that I can see, only when a proxy has been sent "only-if-cached", though maybe I'm missing other cases). That's not being broken; indeed it's very useful, since we can have more complicated caching requirements than every agent can cope with and still have things work, because an agent is never wrong in just not caching.
Hi all, Over the last few days I have created three screencasts on REST Describe & Compile, my WADL editor and compiler application written with the Google Web Toolkit (http://tomayac.de/rest-describe/latest/RestDescribe.html). You can check out the screencasts on my blog: http://blog.tomayac.de/index.php?date=2007-07-13&time=13:15:44&perma=REST+Describe+%26+Comp.html ======= The new v0.4.1 supports "Ajaxy" automatic namespace and XML schema discovery. This makes the generated WADL files even better. It can be seen in the BBC API Walkthru screencast. ======= * BBC API Walkthru: http://www.youtube.com/watch?v=ZlbYxiraW7k * Screencast: http://www.youtube.com/watch?v=FXII4kYxmAY * Project Introduction: http://www.youtube.com/watch?v=yizpeiMSbnA Thank you very much for your interest, looking forward to hearing back from you. Have a great weekend! Cheers, Tom -- Thomas Steiner http://blog.tomayac.de mailto:Steiner(DOT)Thomas(AT)gmail[DOT]com
On 7/13/07, Jon Hanna <jon@hackcraft.net> wrote: > However as far as REST is concerned there is no inherent relationship > between any of those URIs. It is extremely easy for a representation of > entry to show its relationship to thread though as the relative URI > is .. and most likely the code that builds the representation of thread > will just always contain a link to .. and depend upon the rest of the > system to make sure it's only going to come into effect somewhere where > that code makes sense. Right, and I'm just using the URIs as shorthand to demonstrate the relationships for purposes of discussion. They could be opaque for all the system cares. But for purposes of the example, parallel URI structure represents parallel representation. In practice, the URIs won't be opaque, and it's entirely possible that some users will manually edit/construct them in the browser address bar (I do, when I'm debugging and want to leap from one part of the system to another), so to a certain extent having sensible structure there *is* kind of a design goal, but the form should follow the function there so I'm not too concerned. > Maybe I'm not getting what you're saying though. I'm trying to figure out if there are any gotchas for having resources (either variants on existing ones, or wholly new ones) that "skip" hierarchy levels - and if not, what the most intuitive, logical way to express those resources is.
Karen suggests: > Perhaps next we could take up Ford vs. Chevy? > Coke vs. Pepsi? > PC vs. Mac? Naw....let's stick to REST vs WS-*. Much more fun. ;-) Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
On Fri, Jul 13, 2007 at 06:10:04AM -0400, Elliotte Harold wrote: > REST is merely a schema language Now I'm totally lost. I thought REST was an architectural style. Maybe I don't know what you mean by "schema" or "language". -- Paul Winkler http://www.slinkp.com
At Fri, 13 Jul 2007 01:32:49 +0000, "Eric J. Bowman" <eric@...> wrote: > [I wrote:] > >Additionally, if a HEAD returns 410, why should a software client be > >expected to believe that a DELETE might have any meaningful effect on > >that resource? > > > > Good point, unless the 410 is marked no-cache. I’m no expert in HTTP caching, but it doesn’t seem like an error to me for an intermediary to forward a DELETE to a server, receive a 410 marked no-cache from the server, then receive a 2nd DELETE from the client, perform a HEAD on the server URI to check that, yes, it is still a 410, and send the old DELETE response back. no-cache means no cache without revalidation, not no-cache full stop. best, Erik Hetzner
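Erik's reading of no-cache (storing is allowed, but the stored copy may only be reused after revalidation) can be sketched as a toy cache. To be clear, the dict-based store and the `revalidate` callback below are illustrative stand-ins invented for this sketch, not part of any real HTTP stack:

```python
# Toy illustration, not a real HTTP cache: a response marked "no-cache"
# may be stored, but must be revalidated with the origin before it is
# served again -- "no cache without revalidation", not "never store".

def serve(cache, url, revalidate):
    """Return a stored response, revalidating no-cache entries first."""
    entry = cache.get(url)
    if entry is None:
        return None  # nothing stored; the request must go to the origin
    if "no-cache" in entry.get("cache_control", ""):
        # The stored copy may only be reused if the origin confirms it.
        if not revalidate(url, entry):
            del cache[url]
            return None
    return entry

cache = {"/thing": {"status": 410, "cache_control": "no-cache"}}

# Origin still says 410 Gone, so the stored response may be reused.
fresh = serve(cache, "/thing", lambda url, entry: True)
```

A real cache would also have to honour no-store, max-age, and validators such as ETag/If-None-Match; this only illustrates the revalidation rule Erik describes.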
> >It just doesn't make sense to accept a DELETE, and subsequently return >404 or 410, and to still claim that there is a representation (with >that etag) mapped to that URI. The point of a DELETE is to remove that >mapping, and the point of 404/410 is so that the server can signal that >there are no representations left mapped to that URI. > Sending an ETag with a 410 response indicates that there is a 200 OK representation available, how? The 410 Gone status is hardly overridden by the presence of an ETag. > >> The purpose of the spec is to allow interoperability. If my >> implementation does not cause interoperability problems then I don't see >> where the spec has been violated, or needs to be changed to disallow >> what I have implemented. > >The same can be said about SOAP (I guess), so would you defend SOAP as well? > I know nothing of SOAP, are you saying RFC 2616 needs to be altered to disallow SOAP? Why? Does SOAP violate RFC 2616? > >"10.4.5 404 Not Found > >The server has not found anything matching the Request-URI...." > Where does that say the 404 response itself can't have an ETag? The presence of an ETag says nothing about the state of the resource, only that the response body has an ETag. You are very good at reiterating your same point, as if the more times you say the same thing the more wrong it makes me... > >It seems that you say that: > >- there's nothing here, > Right, that is certainly implied by a 404 or a 410 response... > >- but it has a representation, > No, I say the representation of the 404 or 410 response is an entity, and entities may be tagged. > >- but if you ask for it, I'll not send it to you. > If you keep repeating the same request, you'll keep getting a 404 or a 410 response, the presence of an ETag indicates that some other response is available, how? > >Sorry, doesn't compute.
> Nor does your repeated assertion that, by the very presence of an ETag, a 404 or a 410 response can be assumed to mean something besides "Not Found" or "Gone". Since a 410 Gone response is cacheable, shouldn't the spec say something about only using Last-Modified and other headers, while avoiding ETag? Your argument is that cache-control headers are not allowed on cacheable 410 responses? Why not? > >> OK, but only if you can justify why this restriction is needed despite >> the fact that no interoperability problems result if it is not met. > >I do not believe there'll be no interop problem. But besides that, the >same could be said about a protocol that tunnels everything through >POST, right? > Yet you have failed to explain just what problems will arise. If a protocol tunnels everything through POST, it violates RFC 2616 how? Why do we need to change the spec to disallow POST tunneling? Because to do so is un-RESTful? HTTP is not REST, there are a wide variety of possibilities not restricted by RFC 2616 -- like SOAP or POST tunneling -- which interoperate just fine. > >OK, go on ignoring the definitions on 404 and 410. > You keep making claims about 404 and 410 which are simply not in the spec, then you accuse me of ignoring the spec? Strawman arguments don't hold much water with me. > >Oh well. That's when you invoke curl *twice*, and the second >instantiation has no knowledge about what happened before. > >Things may look entirely different if you're using a HTTP stack that >does have such a kind of memory. > Look, I already said "I stand corrected" on the cacheability of a 410 response. Did you not read that? I must've said it three times so far. > >So an HTTP stack that internally implements a cache and does not forward >a GET request to the origin server when it already has the answer is >broken? If you really think so, I'd recommend that you review the XHR >working draft Look, I already said "I stand corrected" on the cacheability of a 410 response.
Did you not read that? I must've said it four times so far. > > Sorry? <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.10.4.11>: > Look, I already said "I stand corrected" on the cacheability of a 410 response. Did you not read that? I must've said it five times so far. > >Unless it is, see above. > Look, I already said "I stand corrected" on the cacheability of a 410 response. Did you not read that? I must've said it six times so far. > >As others have pointed out, the spec clearly says that a 410 response is >cacheable unless marked otherwise: > Look, I already said "I stand corrected" on the cacheability of a 410 response. Did you not read that? I must've said it seven times so far. I wonder when anyone will acknowledge that I have admitted my error? -Eric
Eric J. Bowman wrote: > >It just doesn't make sense to accept a DELETE, and subsequently return > >404 or 410, and to still claim that there is a representation (with > >that etag) mapped to that URI. The point of a DELETE is to remove that > >mapping, and the point of 404/410 is so that the server can signal that > >there are no representations left mapped to that URI. > > > > Sending an ETag with a 410 response indicates that there is a 200 OK > representation available, how? The 410 Gone status is hardly > overridden by the presence of an ETag. The two pieces of information contradict each other. It's not relevant whether the status code overrides the ETag, or vice versa. > >> The purpose of the spec is to allow interoperability. If my > >> implementation does not cause interoperability problems then I don't see > >> where the spec has been violated, or needs to be changed to disallow > >> what I have implemented. > > > >The same can be said about SOAP (I guess), so would you defend SOAP as > well? > > > > I know nothing of SOAP, are you saying RFC 2616 needs to be altered to > disallow SOAP? Why? Does SOAP violate RFC 2616? No, I didn't say that. I just mentioned SOAP because it seems to be a well-known example about how you can be HTTP compliant, without being Restful or even using HTTP in the best way. > >"10.4.5 404 Not Found > > > >The server has not found anything matching the Request-URI. ..." > > > > Where does that say the 404 response itself can't have an ETag? The > presence of an ETag says nothing about the state of the resource, Yes it does. ETag is a response header, as defined in <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.6.2>: "...These header fields give information about the server and about further access to the resource identified by the Request-URI..." > only that the response body has an ETag. You are very good at > reiterating your same point, as if the more times you say the same > thing the more wrong it makes me...
Well, I'll keep reiterating what the spec says, sorry. > >It seems that you say that: > > > >- there's nothing here, > > > > Right, that is certainly implied by a 404 or a 410 response... > > > > >- but it has a representation, > > > > No, I say the representation of the 404 or 410 response is an entity, > and entities may be tagged. Yes, it is an entity, and you can tag it, but (ironically) not with the "ETag" header. <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.14.19>: "The ETag response-header field provides the current value of the entity tag for the requested variant." Now I'll be the first to agree that the term "requested variant" needs to be clarified; it seems that the authors only thought about GET/HEAD when they wrote it. > >- but if you ask for it, I'll not send it to you. > > > > If you keep repeating the same request, you'll keep getting a 404 or > a 410 response, the presence of an ETag indicates that some other > response is available, how? > > > > >Sorry, doesn't compute. > > > > Nor does your repeated assertion that, by the very presence of an ETag, > a 404 or a 410 response can be assumed to mean something besides "Not > Found" or "Gone". Since a 410 Gone response is cacheable, shouldn't No, what I'm saying is that a 404/410 response carrying an ETag does not make sense. > the spec say something about only using Last-Modified and other headers, > while avoiding ETag? Your argument is that cache-control headers are > not allowed on cacheable 410 responses? Why not? I didn't say that, and of course they are allowed. > >> OK, but only if you can justify why this restriction is needed despite > >> the fact that no interoperability problems result if it is not met. > > > >I do not believe there'll be no interop problem. But besides that, the > >same could be said about a protocol that tunnels everything through > >POST, right? > > > > Yet you have failed to explain just what problems will arise. 
If a > protocol tunnels everything through POST, it violates RFC 2616 how? It doesn't, but it isn't a good application of HTTP. > Why do we need to change the spec to disallow POST tunneling? We don't, and I didn't say that. > ... OK, let's just agree that we disagree and move on. Best regards, Julian
At Fri, 13 Jul 2007 18:58:34 +0000, "Eric J. Bowman" <eric@...> wrote: > Sending an ETag with a 410 response indicates that there is a 200 OK > representation available, how? The 410 Gone status is hardly > overridden by the presence of an ETag. I believe that Julian is saying, that, since ‘The ETag response-header field provides the current value of the entity tag for the requested variant.’ and that a 410 indicates that there is no variant (that is, representation), an ETag makes no sense. best, Erik Hetzner ;; Erik Hetzner, California Digital Library ;; gnupg key id: 1024D/01DB07E3
I have written out my take on the discussion in this thread in a post "The limitations of JSON" http://blogs.sun.com/bblfish/entry/the_limitations_of_json which covers what I said here, but with extra illustrations to make it easier to understand. Henry On 13 Jul 2007, at 12:42, Elliotte Harold wrote: > Story Henry wrote: > > Other provincialisms of the example > > above is to assume that a post code is a number. Clearly they > have never > > lived in the UK! > > Or New Jersey. To this day, I see mailing labels with four digit zip > codes because somebody stuck a zip code in an int field somewhere. > > (For those outside the U.S. Northeast, New Jersey zip codes all begin > with "0"; e.g. 09748) >
On 7/13/07, Jon Hanna <jon@hackcraft.net> wrote: > Karen wrote: > > So okay, suppose the hierarchy is category/blog/label/thread/entry, > > where one "entry" in a thread is the blog post and the rest are > > comments. And where "label" is one of those subcategory sorts of > > things, not a tag that entries can have none or a dozen of. Deleting a > > blog requires deleting its labels, deleting a label requires deleting > > its threads, deleting a thread requires deleting its entries. (What > > you probably DON'T want to get into is accepting PUTs. > > If I understand you right you mean that there could be a uri of > http://example.net/catX/blogY/labelZ/thread/entry and one of > http://example.net/catX/blogY/labelZ/thread/ and one of > http://example.net/catX/blogY/labelZ/ and so on. > > This is all well and good - indeed I highly recommend these sorts of > hierarchical URIs. > > However as far as REST is concerned there is no inherent relationship > between any of those URIs. I think there is, as I tried to describe in my last message. You disagree? Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
On 7/13/07, Mark Baker <distobj@...> wrote: > > However as far as REST is concerned there is no inherent relationship > > between any of those URIs. > > I think there is, as I tried to describe in my last message. You disagree? I read that as being that the URI itself can't define the relationship; it has to be supported via links and so forth in the representation to be properly RESTful. I'm good with that, like I said... I'm just using the nekkid urls as shorthand for the representations, which might require me to pick a side in the Great JSON vs. XML War...
On 7/14/07, Josh Sled <jsled@...> wrote: > "Bob Haugen" <bob.haugen@...> writes: > > So it seems like json binds client and server together to the extent > > that the client needs to know the quirks of the json structure pretty > > intimately... > > Is this uniquely true of JSON? It seems like it's true of XML as well ... if > "xAL" creates different forms for different addresses, they're fundamentally > different. And I don't see how pathing solves the problem, really. I don't know what I was thinking of. You are correct, of course. I am embarrassed.
Robert Sayre wrote: > XML is treated as executable code all the time. > Now that's an unsupported assertion. XML is not a Turing complete programming language for good reason. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Paul Winkler wrote: > On Fri, Jul 13, 2007 at 06:10:04AM -0400, Elliotte Harold wrote: >> REST is merely a schema language > > Now I'm totally lost. I thought REST was an architectural style. > Maybe I don't know what you mean by "schema" or "language". > My bad. Somehow I was thinking: "RELAX" and typing REST. I have to go back and reread the original post and see what it said about this. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
* Elliotte Harold <elharo@...> [2007-07-13 02:55]: > A. Pagaltzis wrote: > > Impedance mismatch with my language’s data model is not a > > feature, it’s a liability. > > Serialized formats that are tied to one language are a > liability, not a feature. I agree. Good thing that JSON is not. > > If hash maps in my language can only have unique keys, I want > > a format that enforces this constraint at the parser level, > > so that ill-formed messages are defined out of existence, > > freeing me from ever having to deal with them at a higher > > level in the application. > > Serialized formats that restrict what you can say are a > liability. When you want to deserialise a data structure, a serialisation format that restricts the data to the deserialised model… is… a liability…? I fail to follow. Oh yeah, and I notice the non-unique keys example I gave somehow went uncommented… :-) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On 7/15/07, Elliotte Harold <elharo@...> wrote: > Robert Sayre wrote: > > > XML is treated as executable code all the time. > > > > Now that's an unsupported assertion. XML is not a Turing complete > progrmaming language for good reason. ...neither is JSON. You were discussing properties of evaluators. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
* Eric J. Bowman <eric@...> [2007-07-13 04:00]:
> I still say to meet this constraint PUT must be given the same
> meaning in HTTP as STOR has in FTP but I will stand on my two
> earlier explanations for why this must be so, and not repeat
> myself here.
In another message you said:
> My application uses one URL and content negotiation to serve
> four different "text/html" representations and three
> "application/xhtml+xml" representations (plus one Atom and
> one PDF) depending on client capability, so I would have to
> say that I am keenly aware of the separation between the HTTP
> resource/representation model and the file-centric models of
> FTP and WebDAV.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Jon Hanna <jon@...> [2007-07-13 12:50]: > Firstly, I'm quite willing to believe that every HTTP client > in existence is broken, albeit probably only for edge cases. > > I'm more than willing to believe that every HTTP client in > existence is sub-optimal - whether by oversight, flaw or > conservative assumptions about the compliance of other HTTP > agents (possibly *correct* conservative assumptions). Note that there has never been an even slightly comprehensive test suite for any aspect of HTTP. As Mark Nottingham’s recent research into proxies has shown, implementations of HTTP tend to be largely to entirely compliant as long as you stay on the well-beaten path of RFC 2616, and get progressively less consistent as features become rarer. This is consistent with his earlier research about XMLHttpRequest implementations, as well as other results, such as a survey I have read of browser compliance with the prescribed behaviour for a variety of redirects under different conditions. All of them indicate that the browser-related subset of HTTP is, for the most part, well implemented by everyone; if you go beyond that, hic sunt dracones. Without a test suite covering oddball or uncommon features, implementors are liable to make mistakes in their implementations or just outright punt on them. Thereby materialises a cycle in which no clients make use of these features because they are badly supported by servers, and no servers support these features because no clients need them. Therefore it is true indeed: every HTTP client in existence is suboptimal if not outright broken. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Karen <karen.cravens@...> [2007-07-13 16:25]: > I'm trying to figure out if there are any gotchas for having > resources (either variants on existing ones, or wholly new > ones) that "skip" hierarchy levels - and if not, what the most > intuitive, logical way to express those resources is. You mean the representation that can be retrieved at some URI would contain the representations for all the resources it links to (or a paginated set thereof). No, there is nothing un-RESTful about that. Just make sure you include the links to each separate resource whose representation is included. Patrick mentioned Atom feeds, which are a good example of that. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
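Aristotle's suggestion (inline the child representations, but still link each one) might look like the following JSON sketch; the URIs and field names here are invented purely for illustration, echoing the hypothetical hierarchy from earlier in the thread:

```python
import json

# Illustrative only: a "thread" representation that skips a level and
# inlines its entries (much as an Atom feed inlines member entries),
# while each inlined entry still carries its own link so clients can
# address it as a separate resource.
thread = {
    "self": "/catX/blogY/labelZ/thread42",
    "entries": [
        {"self": "/catX/blogY/labelZ/thread42/1", "body": "first post"},
        {"self": "/catX/blogY/labelZ/thread42/2", "body": "a reply"},
    ],
}

doc = json.dumps(thread, indent=2)
```

The per-entry "self" links are what keep this RESTful: the composite resource is a convenience, not the only way to reach its members.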
Robert Sayre wrote: > ...neither is JSON. You were discussing properties of evaluators. > In theory JSON is not Turing complete. In practice, it is. JSON is JavaScript, and JavaScript is Turing complete. You're not supposed to put arbitrary JavaScript into JSON, but people can and do. eval-able formats are dangerous. Formats that mix data and code are dangerous. That's the bottom line. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
John D. Heintz wrote: > Elliotte, > > You are mistaken. JSON has no place for code - it is only a data format. > JSON specifically is a sub-set of JavaScript. (And also a sub-set of > YAML...) If that were true, there wouldn't be a problem. It isn't. > You may want to actually read http://json.org/ before continuing this > diatribe... > Please assume that I have read that, and quite a bit more, many times. The issue is not what the specs say. The issue is what code does. JSON was deliberately and carefully designed to be able to be passed to eval() and executed as Javascript. There is nothing about JSON that prevents the embedding of arbitrary JavaScript code. This is by design, not accident. Waving your hands and crying, "But you weren't supposed to do that" is no defense against an attacker. I thought we got away from the bad practice of treating code as data and vice versa forty years ago. But I guess there's a whole new generation now who didn't learn those lessons the first time, and will now have to learn them for themselves the hard way. :-( -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 7/16/07, Elliotte Harold <elharo@...> wrote: > > > > > > > John D. Heintz wrote: > > Elliotte, > > > > You are mistaken. JSON has no place for code - it is only a data format. > > JSON specifically is a sub-set of JavaScript. (And also a sub-set of > > YAML...) > > If that were true, there wouldn't be a problem. It isn't. > > > You may want to actually read http://json.org/ before continuing this > > diatribe... > > > > Please assume that I have read the and quite a bit more many times. You clearly haven't read the code at json.org. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
* Elliotte Harold <elharo@...> [2007-07-17 01:00]: > You're not supposed to put stuff arbitrary JavaScript into > JSON, but people can and do. Then it’s not JSON anymore and JSON parsers will choke on it. JSON is a computation-free subset of Javascript. (If you wanted to parse it using `eval` in Javascript, you’d need something like Perl’s `Safe` module, which lets the host code pick VM ops to allow or forbid prior to invoking untrusted code.) > eval-able formats are dangerous. Formats that mix data and code > are dangerous. That's the bottom line. Billion laughs. And before you retort, consider whether anything you say is inapplicable to JSON. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
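The distinction Aristotle draws (a strict parser rejects smuggled code, while an eval-style evaluator runs it) can be demonstrated with Python standing in for the two kinds of JSON consumer; the payloads are made up for illustration:

```python
import json

# A strict JSON parser accepts only the data subset; anything that
# smuggles in computation is a parse error, not an execution.
assert json.loads('{"a": 1}') == {"a": 1}

rejected = False
try:
    json.loads('{"a": alert(1)}')   # a JavaScript call, not JSON
except json.JSONDecodeError:
    rejected = True

# eval-style "parsing" is the dangerous path: an evaluator happily
# runs an expression hidden in what looked like data.
evaluated = eval('{"a": 1 + 1}')    # computation happens here
```

This is exactly the difference between JavaScript's `eval()` on untrusted input and a proper JSON parser such as `JSON.parse`: the former inherits the full language, the latter only the data subset.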
> >> I still say to meet this constraint PUT must be given the same >> meaning in HTTP as STOR has in FTP but I will stand on my two >> earlier explanations for why this must be so, and not repeat >> myself here. > >In another message you said: > >> My application uses one URL and content negotiation to serve >> four different "text/html" representations and three >> "application/xhtml+xml" representations (plus one Atom and >> one PDF) depending on client capability, so I would have to >> say that I am keenly aware of the separation between the HTTP >> resource/representation model and the file-centric models of >> FTP and WebDAV. > Just because HTTP has resources and representations does not change my view that PUT must be given the same *replacement semantics* as STOR. PUT is not meant to have merge semantics, any more than STOR is meant to have merge semantics; this should not be misconstrued as meaning that I believe a PUT must replace a file on a filesystem a la FTP or WebDAV. Would you like me to re-word my prior post? "I still say to meet this constraint PUT must be given replacement semantics in HTTP just as STOR must be given replacement semantics in FTP." Nowhere do I imply this has anything to do with overwriting a file on disk; that is not what "replacement semantics" means by any stretch. Replacing a file on disk vs. changing umpteen database fields is an implementation detail, not different semantics being applied to the method used. -Eric
Eric J. Bowman wrote: > Just because HTTP has resources and representations does not change my > view that PUT must be given the same *replacement semantics* as STOR. > PUT is not meant to have merge semantics, any more than STOR is meant > to have merge semantics, this should not be misconstrued as meaning > that I believe a PUT must replace a file on a filesystem a la FTP or > WebDAV. Would you like me to re-word my prior post? > ... PUT in WebDAV has exactly the same semantics as in HTTP. > ... Best regards, Julian
Hello,
I was wondering if there is a standard for the HTTP response as XML?
So the HTTP protocol returns a HTTP response with a header and entity
body. Is there a standard XML form of this total response?
For example:
<HttpResponse>
<Header>
<Name>Location</Name>
<Value>http://test/test1</Value>
</Header>
<Body>
...
</Body>
</HttpResponse>
Regards,
Roger van de Kimmenade
rogervdkimmenade wrote: > Hello, > > I was wondering if there is a standard for the HTTP response as XML? There's already a standard for encoding HTTP as text (HTTP itself). Someone encoding it as XML presumably has a good reason to do so. Someone else encoding it as XML may well have such a very different good reason to do so that their encoding would be not only different, but inevitably and irreconcilably different. This is all well and good as long as we have a pivot format, so that mapping between n XML HTTP formats is then a matter of n mappings (mapping each of those n formats to the pivot format) rather than n*(n-1) mappings (if we created a mapping for every single pair of such formats). To do this job there is no necessity that the pivot format be in XML, only that it be capable of encoding the entirety of an HTTP message losslessly. HTTP is capable of encoding the entirety of an HTTP message losslessly, and is therefore the ideal candidate. Therefore we don't need a standard for HTTP-in-XML. HTTP-in-XML for a particular reason is another matter, but you haven't given us such a reason yet.
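Jon's n versus n*(n-1) arithmetic checks out; a throwaway sketch, with the assumption (as in his post) that the pivot needs one mapping per format while the pivot-free approach needs one per ordered pair:

```python
# Quick check of the mapping-count argument: with n formats and no
# pivot, every ordered pair of distinct formats needs its own mapping;
# with a pivot format, each format only needs a mapping to the pivot.
def pairwise_mappings(n):
    return n * (n - 1)  # one per ordered pair of distinct formats

def pivot_mappings(n):
    return n            # one mapping per format, to/from the pivot

counts = {n: (pairwise_mappings(n), pivot_mappings(n)) for n in (3, 5, 10)}
```

Even at n=5 the pivot approach wins 5 to 20, and the gap grows quadratically, which is the whole point of nominating HTTP itself as the pivot.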
* Eric J. Bowman <eric@...> [2007-07-17 08:35]: > >> I still say to meet this constraint PUT must be given the > >> same meaning in HTTP as STOR has in FTP but I will stand on > >> my two earlier explanations for why this must be so, and not > >> repeat myself here. > > > >In another message you said: > > > >> My application uses one URL and content negotiation to serve > >> four different "text/html" representations and three > >> "application/xhtml+xml" representations (plus one Atom and > >> one PDF) depending on client capability, so I would have to > >> say that I am keenly aware of the separation between the > >> HTTP resource/representation model and the file-centric > >> models of FTP and WebDAV. > > Just because HTTP has resources and representations does not > change my view that PUT must be given the same *replacement > semantics* as STOR. Oh but that’s *exactly* what the division implies. Once you depart from octet-for-octet equivalence (or any other, a priori prescribed and well-specified form of equivalence), it is no longer testable in any meaningful sense of the word whether the server is merging or replacing things. In fact, it would often be *undecidable* whether the server is doing one or the other. Say I submit XML to your server and you make PDF out of it. Let’s further assume that my XML did not contain anything that would specify the page margins. The PDF has the same page margins as the PDF representation of the previous version of the resource. Can you decide whether the server used the margin settings from the previous version of the resource, or used some global default? In the context of a single request? Nope. And that is all there is to this point. But consider… > PUT is not meant to have merge semantics, any more than STOR is > meant to have merge semantics, this should not be misconstrued > as meaning that I believe a PUT must replace a file on a > filesystem a la FTP or WebDAV. Would you like me to re-word my > prior post? 
> > "I still say to meet this constraint PUT must be given > replacement semantics in HTTP just as STOR must be given > replacement semantics in FTP." > > Nowhere do I imply this has anything to do with overwriting a > file on disk, that is not what "replacement semantics" means by > any stretch. Replacing a file on disk vs. changing umpteen > database fields is an implementation detail, not different > semantics being applied to the method used. … that nowhere did *I* say that PUT should intentionally be made to do unexpected or useless things. If you noticed, I argued quite strongly that PATCH has to be separate from PUT, because that allows the client to express intent more unambiguously. Why is that desirable? Obviously it is so because the server can then react in more useful ways. Conflating PUT and PATCH deprives the client of expressiveness, which in turn limit’s the server’s options in interpreting the request in the client’s interests. However, the server *is* free to interpret PUT requests as it sees fit. A server that does useless things with a PUT request but returns the right status codes isn’t broken; it’s simply useless. There is no problem with this, though: usefulness is a human concept, not a formal one. In the long run, no one will continue using such a server; and that’s the end of its useless but compliant behaviour. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On 7/17/07, Jon Hanna <jon@...> wrote: > > rogervdkimmenade wrote: > > Hello, > > > > I was wondering if there is a standard for the HTTP response as XML? > > There's already a standard for encoding HTTP as text (HTTP itself). Jon is correct, but if you do need to serialize to XML for some reason the W3C is actively working on an "HTTP Vocabulary in RDF": http://www.w3.org/TR/HTTP-in-RDF/ Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
Alan Dean wrote: > Jon is correct, but if you do need to serialize to XML for some reason > the W3C is actively working on an "HTTP Vocabulary in RDF": > > http://www.w3.org/TR/HTTP-in-RDF/ Possibly restraining the RDF-XML serialisation in some way. If you want to be able to build something easily on top of an XML parser, rather than on the triples generated from an RDF-XML parser, then RDF-XML would be rather unwieldy otherwise (so many different possible encodings of the same thing). Precisely what I was thinking of when I said that two people with a good reason for encoding HTTP in XML could have good reasons for taking incompatible approaches.
Hi,
I am a staff programmer for a lab at UCLA. This is my first time
posting to the group. In our lab we are trying to go with a web-based
approach to expose some of the image processing functionality being
developed.
Some examples of what the processing would output are transformations
on the images themselves and some sort of feature extraction. There
seems to be a strong need for a mechanism that will allow multiple
image processors to operate on an input in a chain.
As an experiment I wanted to build a JSON-formatted, RESTful image
resize service. Can you guys suggest what it would look like? I am a
little confused on how something that is very verb based like resizing
can be modeled as nouns.
I would like to use multipart/mixed to bundle JSON data with multiple
binary images.
I wanted to use CIDs to reference files from the JSON, but I am not
sure how to make web browsers support that. To compensate I am just
referencing files in order of the message...
Here's what I am thinking:
POST /resizedImages/ HTTP/1.1
Content-Length: 550818
Content-Type: multipart/form-data; boundary=BbC04y
--BbC04y
Content-Disposition: form-data; name="entries"
Content-Type: application/json

{ "entries": [
{ "image":1, "resized-width":"640" },
{ "image":0, "resized-width":"480", "resized-height":"640" }
] }
--BbC04y
Content-Disposition: form-data; name="image"; filename="file2.gif"
Content-Type: image/gif
...contents of file2.gif...
--BbC04y
Content-Disposition: form-data; name="image"; filename="file1.jpg"
Content-Type: image/jpeg
...contents of file1.jpg...
--BbC04y--
Any suggestions will be appreciated.
Cheers,
Joe
I suggest you do some research on the Khoros system developed by UNM
back in 1990-91. It would have been a significant system if the PIs
hadn't succumbed to greed.
http://rab.ict.pwr.wroc.pl/khoros_root/topmost_toc.html
The legacy bits are now known as AccuSoft VisiQuest.
http://www.accusoft.com/products/visiquest/overview.asp
though it does not appear to be much of an improvement over the
original system. It was originally distributed in source code form,
so you might be able to get the original code from someone else
in your research specialty.
Cheers,
Roy T. Fielding <http://roy.gbiv.com/>
Chief Scientist, Day Software <http://www.day.com/>
>>>>> "uclajoekim" == uclajoekim <joe.kim@...> writes:
uclajoekim> As an experiment I wanted to build a JSON-formated,
uclajoekim> RESTful image resize service. Can you guys suggest
uclajoekim> what it would look like? I am a little confused on
uclajoekim> how something that is very verb based like resizing
uclajoekim> can be modeled as nouns.
Verbs? Is it about retrieving?
It could just be:
GET /images/12345.jpg?size=640x480
Or to make it more cacheable:
GET /images/640x480/12345.jpg
First come up with a set of nice URLs. That might help.
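Berend's path scheme could be routed with a small parser along these lines (a sketch; the function name, the `WxH` size segment, and the path layout are assumptions for illustration, not anything prescribed in the thread):

```python
import re

# Hypothetical route for URLs like /images/640x480/12345.jpg.
SIZE_ROUTE = re.compile(r"^/images/(\d+)x(\d+)/([\w.-]+)$")

def parse_resize_path(path):
    """Return (width, height, image_id) for a resize URL, or None
    if the path does not match the /images/WxH/name scheme."""
    m = SIZE_ROUTE.match(path)
    if m is None:
        return None
    width, height, image_id = m.groups()
    return int(width), int(height), image_id
```

Because the size lives in the path rather than the query string, each distinct size gets its own URL, which is what makes the second form more cache-friendly.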
And sorry if I completely misunderstood the request.
--
All the best,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
> > PUT in WebDAV has exactly the same semantics as in HTTP. > Yes, the semantics of "replace", exactly the point I have been trying to make. This does not mean that in HTTP, a file must be replaced in the filesystem, a la WebDAV. -Eric
Eric J. Bowman wrote: > > > > > > PUT in WebDAV has exactly the same semantics as in HTTP. > > > > Yes, the semantics of "replace", exactly the point I have been trying > to make. This does not mean that in HTTP, a file must be replaced in > the filesystem, a la WebDAV. Ok, I'll repeat it once more: PUT in WebDAV has exactly the same semantics as in HTTP. That is, WebDAV does not change the definition of PUT. PUT is what RFC2616 says it is, nothing else. Best regards, Julian
> > Oh but that’s *exactly* what the division implies. Once you
> > depart from octet-for-octet equivalence (or any other, a priori
> > prescribed and well-specified form of equivalence), it is no
> > longer testable in any meaningful sense of the word whether the
> > server is merging or replacing things. In fact, it would often
> > be *undecidable* whether the server is doing one or the other.

What happens on the server is opaque to the client. If the user requests that the representation they have received be replaced by the representation they PUT to that URI, the next GET of that URI should reflect the replacement that was requested. It simply does not matter how the server honors that request, so long as that request is honored. If the server merged something, so be it, provided that from the client perspective something appears to have been replaced. You must look at this from the client perspective.

Again, HTML resources on my server are derived from Atom files. If I accept a PUT of the HTML representation, then the server is expected to alter the Atom file in such a way that a subsequent GET of the same HTML representation reflects the changes the user requested -- as transformed from the server-altered Atom source file. I do not claim that this must be a full replacement, just as I do not claim that a partial replacement is the same thing as a merge.

The client did not request a replacement of the Atom file; the request was to change the HTML representation derived from that Atom file. When that HTML representation is next retrieved, it reflects the change the user requested, without the server ever altering any HTML code. Because, you see, that HTML is merely the cached output stream of an XSLT transformation, not a file which can be replaced -- a la FTP or WebDAV. The client is not required to alter the Atom source directly; the server takes care of that implementation detail.
The result of the PUT interaction must have replacement semantics, not merge semantics, as judged from the client perspective on a subsequent GET. Such semantics would, from the client perspective, be indistinguishable from having used FTP STOR to replace that HTML representation, assuming a different system where the HTML representation is a file saved on a disk. We are talking about the semantics of the user interaction here, not the opaque inner workings of the server-side application.

> > Say I submit XML to your server and you make PDF out of it.
> > Let’s further assume that my XML did not contain anything that
> > would specify the page margins. The PDF has the same page margins
> > as the PDF representation of the previous version of the
> > resource. Can you decide whether the server used the margin
> > settings from the previous version of the resource, or used some
> > global default? In the context of a single request?

If the PDF was not modified, as in the client requested that some snippet of XML be PUT to some URI, the PDF derived from that snippet is still subject to the same transformation on the server as the last XML snippet PUT to that URI. Why should a PUT of new XML content change the page margins of a derived PDF, when that PDF was not the subject of the PUT request? If the desired user interaction is a change to the margins of the PDF representation, and such a change is allowed by the server, then it would make much more sense to PUT a new PDF representation, wouldn't it? The margins of the PDF representation have nothing to do with the source content used to create the PDF.

> > Conflating PUT and PATCH deprives the
> > client of expressiveness, which in turn limits the server’s
> > options in interpreting the request in the client’s interests.

IOW, you're agreeing with me again. Assigning merge semantics to PUT breaks the Uniform Interface constraint of REST.

> > However, the server *is* free to interpret PUT requests as it
> > sees fit.
> Not if the result of a subsequent GET for the resource in question yields some representation which has merged the PUT request into what was there before, instead of replacing something -- in that case, the Uniform Interface constraint is violated and the app is not RESTful. -Eric
> >>> PUT in WebDAV has exactly the same semantics as in HTTP. >>> > >> Yes, the semantics of "replace", exactly the point I have been trying >> to make. This does not mean that in HTTP, a file must be replaced in >> the filesystem, a la WebDAV. > > Ok, I'll repeat it once more: PUT in WebDAV has exactly the same > semantics as in HTTP. That is, WebDAV does not change the definition of > PUT. PUT is what RFC2616 says it is, nothing else. > You've lost me again, Julian. You are saying I am wrong about what, now? Did I somewhere make a claim that WebDAV's use of PUT has semantics other than replace? Are you saying that somehow, the semantics of a WebDAV PUT are different from the semantics of an FTP STOR? Where have I claimed that an HTTP PUT should have different semantics than a WebDAV PUT? Your point eludes me here. HTTP PUT == WebDAV PUT == FTP STOR == replacement semantics. Not merge semantics! This is the essence of a Uniform Interface, is it not? -Eric
Eric J. Bowman wrote: > > > > > >>> PUT in WebDAV has exactly the same semantics as in HTTP. > >>> > > > >> Yes, the semantics of "replace", exactly the point I have been trying > >> to make. This does not mean that in HTTP, a file must be replaced in > >> the filesystem, a la WebDAV. > > > > Ok, I'll repeat it once more: PUT in WebDAV has exactly the same > > semantics as in HTTP. That is, WebDAV does not change the definition of > > PUT. PUT is what RFC2616 says it is, nothing else. > > > > You've lost me again, Julian. You are saying I am wrong about what, > now? Did I somewhere make a claim that WebDAV's use of PUT has > semantics other than replace? Are you saying that somehow, the You said (look up a few lines): "This does not mean that in HTTP, a file must be replaced in the filesystem, a la WebDAV." So it seemed that you *were* saying that WebDAV PUT != HTTP PUT. > ... Best regards, Julian
Julian Reschke wrote:
> So it seemed that you *were* saying that WebDAV PUT != HTTP PUT.

[Concrete use of HTTP] [Method] != HTTP [Method]

It is reasonable to assume that any given correct use of HTTP will go beyond HTTP. One of the rules of my personal site is that a GET on http://www.hackcraft.net/images/forestr/ will return a particular image that is used as a background detail on a few pages. I haven't broken GET by imposing a rule beyond those in HTTP, I've just defined how I use it.

WebDAV uses HTTP in a particular way. It's perfectly reasonable to expect its use of PUT to be compliant with HTTP's definition of PUT (it's a bug otherwise) but completely unreasonable to expect how it uses PUT to define how all HTTP applications use PUT.

I think the semantics of PUT are neither "replace" nor "merge" but rather "assert". The semantics of GET are also assert. Client is interested in a resource, GET causes the server to return a representation of the resource. Server is therefore making assertions about that resource. Client holds an opinion about the resource, PUT causes the client to send a representation about the resource. Client is therefore making assertions about that resource.

"Replace" and "merge" don't come into it. If our resources are files then they might (and in that case "replace" is the only sensible interpretation, but that still doesn't tell us whether one or many files are replaced). With the vast majority of requests, though, the resource is not a computer file. "Replace" and "merge" don't even have any real meaning with many resources.
Jon Hanna wrote: > > > Julian Reschke wrote: > > So it seemed that you *were* saying that WebDAV PUT != HTTP PUT. > > [Concrete use of HTTP] [Method] != HTTP [Method] Yes. > WebDAV uses HTTP in a particular way. It's perfectly reasonable to > expect it's use of PUT to be compliant with HTTP's definition of PUT > (it's a bug otherwise) but completely unreasonable to expect how it uses > PUT to define how all HTTP applications use PUT. It may be unreasonable to *expect* that, but that's how it is defined (or rather not defined, as RFC2518/RFC4918 do not modify the definition of PUT at all). Can we please distinguish between "WebDAV as specified" (in the RFCs), and "WebDAV as used in many popular servers"? > ... Best regards, Julian
uclajoekim wrote:
> Hi,
>
> I am a staff programmer for a lab at UCLA. This is my first time
> posting to the group. In our lab we are trying to go with a web-based
> approach to expose some of the image processing functionality being
> developed.
>
> Some examples of what the processing would output are transformations
> on the images themselves and some sort of feature extraction. There
> seems to be a high need to have a mechanism that will allow multiple
> image processors operate on an input in a chain.
>
> As an experiment I wanted to build a JSON-formated, RESTful image
> resize service. Can you guys suggest what it would look like? I am a
> little confused on how something that is very verb based like resizing
> can be modeled as nouns.
>
> I would like to use multipart/mixed to bundle JSON data with multiple
> binary images.
>
> I wanted to use CID's, to reference files from the JSON, but I am not
> sure how to make web-browsers support that. To compensate I am just
> referencing files in order of the message...
>
> Here's what I am thinking:
>
> POST /resizedImages/ HTTP/1.1
> Content-Length: 550818
> Content-Type: multipart/form-data; boundary=BbC04y
> --BbC04y
> { "entries": [
> { "image":1, "resized-width":"640" },
> { "image":0, "resized-width":"480", "resized-height":"640" },
> ]
> }
> --BbC04y
> Content-Disposition: form-data; name="image"; filename="file2.gif"
> Content-Type: image/gif
>
> ...contents of file2.gif...
> --BbC04y
> Content-Disposition: form-data; name="image"; filename="file1.jpg"
> Content-Type: image/jpeg
>
> ...contents of file1.jpg...
>
> Any suggestions will be appreciated.
>
> Cheers,
> Joe
>
Hi Joe,
By coincidence, I brought up a related issue this morning on a GIS
discussion list:
http://groups.google.com/group/geo-web-rest/t/7a2aec463ae4a2cb
Here's the way remote processing (vector or raster) is done now in the
GIS industry:
- Post data to a RPC endpoint, along with processing instructions
encoded in XML;
- Get transformed data in response.
The alternative is to take an approach that is a bit like Yahoo Pipes.
Create a processing resource at a URL like
http://example.com/processing/resize
and then pull imagery through it like
GET
http://example.com/processing/resize?image=http://zcologia.com/images/kirok.jpg&width=200&height=200
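Such pull-through URLs could be assembled with a small helper (a sketch; the function name and parameter names are illustrative, not part of any existing API):

```python
from urllib.parse import urlencode

def processing_url(base, operation, source, **params):
    """Build a pull-through processing URL: the source image and the
    operation's parameters go into the query string of a processing
    resource, e.g. <base>/resize?image=...&width=200&height=200."""
    query = urlencode({"image": source, **params})
    return "%s/%s?%s" % (base.rstrip("/"), operation, query)
```

Since `urlencode` percent-encodes the nested image URL, the output of one processing resource can itself be fed as the `image` parameter of another, which gives you the chaining Joe asked about.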
Cheers,
Sean
Just use the XML-HTTP RDF vocabulary mentioned, and create yourself an XML Crystallization of the RDF graph [2] so that you can use your xml database. Since the HTTP vocab is already invented you just need to work on crystalizing it for your DB. Then you get a good semantics, plus a nice xml syntax. Henry [1] http://www.w3.org/TR/HTTP-in-RDF/ [2] http://blogs.sun.com/bblfish/entry/crystalizing_rdf On 18 Jul 2007, at 12:44, Roger van de Kimmenade wrote: > The reason i need the HTTP response in XML is the use of an XML > database. > With this database it is possible to do a POST using an XQuery. > However the XQuery should return XML that contains the result of > the POST. > I was looking for a standard that already does this instead of > defining my own. > > Roger > > > On 7/17/07, Jon Hanna < jon@...> wrote: > rogervdkimmenade wrote: > > Hello, > > > > I was wondering if there is a standard for the HTTP response as XML? > > There's already a standard for encoding HTTP as text (HTTP itself). > > Someone encoding it as XML presumably has a good reason to do so. > Someone else encoding it as XML may well have such a very different > good > reason to do so that their encoding would be not only different, but > inevitably and irreconcilably different. > > This is all well and good as long as we have a pivot format so that > mapping between n XML HTTP formats is then a matter of n mappings > (mapping each of those n formats to the pivot format) rather than > n*(n-1) mappings (if we created a mapping for every single pair of > such > formats. > > To do this job there is no necessity that the pivot format be in XML, > only that it be capable of encoding the entirety of an HTTP message > losslessly. HTTP is capable of encoding an entirety of an HTTP message > losslessly, and is therefore the ideal candidate. > > Therefore we don't need a standard for HTTP-in-XML. > > HTTP-in-XML for a particular reason is another matter, but you haven't > given us such a reason yet. 
> Home page: http://bblfish.net/ Sun Blog: http://blogs.sun.com/bblfish/ Foaf name: http://bblfish.net/people/henry/card#me
Jon, this is very nice.

GET == server assertion, PUT == client assertion

And it does a good job of pointing a way thru the thicket we explored a couple years ago involving PUT and content negotiation:

* PUT a TIFF image
* GET w/ content negotiation to request a JPG, and voila, a different bytestream comes back.

Client asserted state of resource was described by the TIFF. Server "believed" that and then "derived" add'l state available via content negotiation.

I like it ...

-Lee

_____
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jon Hanna
Sent: Wednesday, July 18, 2007 4:00 AM
To: Rest List
Subject: Re: [rest-discuss] Re: To PUT things right

Julian Reschke wrote:
> So it seemed that you *were* saying that WebDAV PUT != HTTP PUT.

[Concrete use of HTTP] [Method] != HTTP [Method]

It is reasonable to assume that any given correct use of HTTP will go beyond HTTP. One of the rules of my personal site is that a GET on http://www.hackcraft.net/images/forestr/ will return a particular image that is used as a background detail on a few pages. I haven't broken GET by imposing a rule beyond those in HTTP, I've just defined how I use it.

WebDAV uses HTTP in a particular way. It's perfectly reasonable to expect its use of PUT to be compliant with HTTP's definition of PUT (it's a bug otherwise) but completely unreasonable to expect how it uses PUT to define how all HTTP applications use PUT.

I think the semantics of PUT are neither "replace" nor "merge" but rather "assert". The semantics of GET are also assert. Client is interested in a resource, GET causes the server to return a representation of the resource. Server is therefore making assertions about that resource. Client holds an opinion about the resource, PUT causes the client to send a representation about the resource. Client is therefore making assertions about that resource. "Replace" and "merge" don't come into it.
If our resources are files then they might (and in that case "replace" is the only sensible interpretation, but that still doesn't tell us whether one or many files are replaced). With the vast majority of requests, though, the resource is not a computer file. "Replace" and "merge" don't even have any real meaning with many resources.
Robert Sayre wrote:
>> Please assume that I have read the spec and quite a bit more many times.
>
> You clearly haven't read the code at json.org.

Robert,

Please stop claiming things of which you have no knowledge. I have read that spec, in depth, every word, multiple times. I have read large parts of the Java source code published at that site.

That you don't happen to like the conclusions I came to after very carefully reading the spec and code multiple times, listening to Crockford talk about it, and working with JSON, does not mean that I did not read it.

You can say anything you like. You can say that you can fly like Superman. That doesn't make it true. The spec can say that "JSON is a text format that is completely language independent" but that isn't true either.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
A. Pagaltzis wrote:
> * Elliotte Harold <elharo@...> [2007-07-17 01:00]:
>> You're not supposed to put arbitrary JavaScript into
>> JSON, but people can and do.
>
> Then it’s not JSON anymore and JSON parsers will choke on it.
> JSON is a computation-free subset of Javascript.

Crackers don't play by the rules. They do not send only well-formed messages that adhere to the spec. Secure software has to be ready for absolutely any input, not just input that follows the spec.

That XML is so complex that you really need a true parser to handle it is a feature, not a bug. It discourages and mostly prevents the use of poor quality, hand-written solutions to handle it. Even in the rare cases where the solutions are hand-written, they're typically based on non-Turing-complete regexes.

No one takes an arbitrary XML document and throws it into a JavaScript interpreter. People do this with JSON all the time, and the language was deliberately designed to make this possible.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
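The same distinction can be illustrated in Python terms (a sketch, not the JavaScript case Elliotte describes): a strict JSON parser accepts only the data subset and rejects anything executable, which is exactly the property an eval-based reader gives up.

```python
import json

def parse_untrusted(text):
    """Parse input as pure JSON data; never evaluate it as code.
    Returns the decoded value, or None if the input is not valid JSON."""
    try:
        return json.loads(text)
    except ValueError:  # json.JSONDecodeError is a ValueError subclass
        return None
```

An eval-based reader would happily execute `alert("pwned")`; a strict parser simply refuses it.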
A. Pagaltzis wrote: > However, the server *is* free to interpret PUT requests as it > sees fit. A server that does useless things with a PUT request > but returns the right status codes isn’t broken; it’s simply > useless. There is no problem with this, though: usefulness is > a human concept, not a formal one. In the long run, no one will > continue using such a server; and that’s the end of its useless > but compliant behaviour. But what are the right status codes? And under what conditions are servers allowed to return them? That's the rub. If the server returns 200 OK or 201 CREATED what may the client reasonably infer from that response about what the server has done? -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 7/23/07, Elliotte Harold <elharo@...> wrote:
> A. Pagaltzis wrote:
>
> > However, the server *is* free to interpret PUT requests as it
> > sees fit. A server that does useless things with a PUT request
> > but returns the right status codes isn't broken; it's simply
> > useless. There is no problem with this, though: usefulness is
> > a human concept, not a formal one. In the long run, no one will
> > continue using such a server; and that's the end of its useless
> > but compliant behaviour.
>
> But what are the right status codes? And under what conditions are
> servers allowed to return them? That's the rub. If the server returns
> 200 OK or 201 CREATED what may the client reasonably infer from that
> response about what the server has done?

It can only infer that the server feels it has met the obligations in the HTTP request as defined by the HTTP protocol. In the case of PUT, that means that the server feels that the action it has taken sets the state of the targeted resource to that represented in the request. The client can't infer anything else about what the server might or might not have done, otherwise it would be more tightly coupled to that particular server.

Mark.
--
Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
Coactus; Web-inspired integration strategies http://www.coactus.com
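Mark's reading can be sketched with a toy in-memory server (the dict store and function name are purely illustrative): the status code reports only whether the resource was created (201) or an existing one was modified (204 here; 200 with a body would be equally valid per RFC 2616), nothing about how the server achieved it.

```python
def handle_put(store, uri, representation):
    """Set the state of `uri` to `representation`.
    Returns 201 if the resource was created, 204 if modified."""
    created = uri not in store
    store[uri] = representation
    return 201 if created else 204
```

Whether `store[uri] = ...` replaces a file, rewrites database rows, or regenerates a derived document is invisible to the client; only the created/modified distinction leaks out.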
I'm working on a cache invalidation pattern that would use the Cache-Control: no-cache header in GET/HEAD requests to force the origin server to generate a fresh version of the resource. However, in my recent searching, I've not seen any clear indication that using the Cache-Control header is allowed/understood for GET/HEAD.

Am I heading down the wrong road with this? Is there a more 'standard' pattern for accomplishing the same task?

My regrets if this is way off topic. Please feel free to re-direct me if there's a better resource for my question.

TIA

MikeA
mike amundsen wrote: > However, in my > recent searching, I've not seen any clear indication that using the > Cache-Control header is allowed/understood for GET/HEAD. No need to search, it was all in RFC 2616. Examine the cache-request-directive production.
Jon: Doh! - The RFC! Thanks for the pointer. I've re-read 14.9 and 14.32 and it's clear to me now that I need to implement support for both Cache-Control:no-cache and Pragma:no-cache. MikeA On 7/23/07, Jon Hanna <jon@...> wrote: > mike amundsen wrote: > > However, in my > > recent searching, I've not seen any clear indication that using the > > Cache-Control header is allowed/understood for GET/HEAD. > > No need to search, it was all in RFC 2616. > > Examine the cache-request-directive production. >
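Mike's conclusion can be captured in a small predicate (a sketch; real Cache-Control parsing also has to cope with directives that carry values, which the simple comma split below handles only for valueless `no-cache`):

```python
def must_revalidate(headers):
    """True if a GET/HEAD request asks the cache to bypass its stored
    copy: Cache-Control: no-cache (RFC 2616 sec. 14.9) or, for
    HTTP/1.0 compatibility, Pragma: no-cache (sec. 14.32)."""
    cache_control = headers.get("Cache-Control", "").lower()
    directives = [d.strip() for d in cache_control.split(",")]
    if "no-cache" in directives:
        return True
    return headers.get("Pragma", "").strip().lower() == "no-cache"
```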
Hi guys Please help me find a proper documentation for REST API. Thanks Sirisha
sirisha_tsnl_1984 wrote: > Hi guys > > Please help me find a proper documentation for REST API. Short answer: http://www.ietf.org/rfc/rfc2616.txt and http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm Long answer. There isn't actually a single REST API, just like there isn't a single object-oriented API, or single client-server API or a single "agile" API. REST is an architectural style. As such it informs us of a way of doing things, rather than defining an API to code against. The web is one example of a system that uses the REST style. Because the web uses the REST style, applications that work with the web are arguably (certainly most people on this list would so argue) advised to make use of it.
Okay, probably not. But I'm kind of puzzled, at least: What do you do with your garden-variety "Email this" link? It's not changing stored server state, it's just... well, it's just emailing this (where "this" is a stored mailing-list post, on the assumption that the requesting, logged-in subscriber didn't get it or accidentally deleted it, and wants to have another copy). I'm guessing it's an imaginary resource with only a POST function (you can't GET anything that will meaningfully tell you it's already been sent or not), but that seems wrong somehow.
On Jul 26, 2007, at 3:51 AM, sirisha_tsnl_1984 wrote: > Hi guys > > Please help me find a proper documentation for REST API. > > Thanks > Sirisha While there isn't a REST API, there are some web sites that are either in part or dedicated to applying REST principles and providing concrete examples of what it means. They are a good resource to get started at least. http://wiki.opengarden.org/REST http://restpatterns.org/ http://www.simplewebservices.org/index.php?title=Main_Page Cheers, - Steve -------------- Steve G. Bjorg http://www.mindtouch.com http://www.opengarden.org
I was just looking through the OpenId specs [1] and came across the
attribute exchange draft [2], which is a way as I understand it of
getting and setting property value pairs for describing people
associated with an OpenID.
Though I like OpenId for its nice Resource Oriented Architecture, I
was a bit concerned about that draft. It is really not doing things
right, I think.
Here are some of the faults I found with it (see comment in [1]):
(1) It ties the identity provider to the identity. The nice thing
about OpenId, is that it separates the role of the identity provider
and the identity. This allows one to have an id (I could use http://
bblfish.net/) and change identity provider over time, as I change job
for example, or even have a number of different ones at the same
time. The OpenId attribute exchange overloads the identity provider
(which is really an identity verifier) with functionality relating to
identity description.
(2) It does not feel RESTful. If something is to return information
it should have a URL. Here there is very clearly overlapping of
concerns as explained above. What is the url for information for one
identity here? I have a large alarm bell ringing when I read sections
such as: "Fetch message" and "store message". Is that not the
equivalent of HTTP GET and PUT?
(3) duplicating effort. This spec is inventing a metadata format, a
query language and storage API, which is a lot of work. These things
have been done before:
-a- metadata framework: as shown above RDF does this very well
already. It has a very powerful semantics, has gone through years of
review by some of the best thinkers in the world, is extensible, self
describing, etc, etc... having to learn another special convention
as proposed here, is one more unnecessary piece of work.
-b- query language: SPARQL though not yet finished does
everything that is needed here as shown in the example given at [1]
-c- storage: this could be done using a number of well known
technologies, such as ftp, scp, Atom Protocol, or even WebDav. AtomP
and WebDav are even nicely RESTful.
It kind of shows I think how one has to be careful not to accept the
work of one group as good off the bat, just because they did excellent
work before hand. Am I being unjust?
Henry
PS. Now criticism (1) above is a little tricky perhaps because if
the Identity Provider has no say
over the OpenId resource, and that is used to point to personal
information, then the open id could be describing itself in ways that
would be completely unacceptable to the organisation controlling the
Identity Provider. So there may be a good reason to have it have some
control over the OpenId, or for one to want to ask it questions
regarding the identity of the person associated with that id.
In my view what will happen is that in the end all identity providers
will have control of their openid they give out. So as Sun gives out
http://openid.sun.com/bblfish so visa will give out
http://visa.com/iod/1231341 and there will then be my personal openid
foaf file, which will link them all together (if I want to).
[1] http://blogs.sun.com/bblfish/entry/foaf_openid
[2] http://openid.net/specs/openid-attribute-exchange-1_0-05.html
Home page: http://bblfish.net/
Sun Blog: http://blogs.sun.com/bblfish/
Foaf name: http://bblfish.net/people/henry/card#me
Karen <karen.cravens@...> writes:
> Okay, probably not. But I'm kind of puzzled, at least:
>
> What do you do with your garden-variety "Email this" link?
>
> It's not changing stored server state, it's just... well, it's just
> emailing this (where "this" is a stored mailing-list post, on the
> assumption that the requesting, logged-in subscriber didn't get it or
> accidentally deleted it, and wants to have another copy). I'm guessing
> it's an imaginary resource with only a POST function (you can't GET
> anything that will meaningfully tell you it's already been sent or
> not), but that seems wrong somehow.
REST isn't something to be religious about... it's a set of guidelines
(ARRR!!) for doing webapps.
The "email this..." is a classic gateway case, but it also has almost
no implications.
As long as your interactions with a resource are safe and idempotent
then it's ok to use GET... and surely:
GET /emailthis?address=karen@...
would be safe and idempotent would it not?
--
Nic Ferrier
http://www.tapsellferrier.co.uk
On 7/26/07, Nic James Ferrier <nferrier@...> wrote: > As long as your interactions with a resource are safe and idempotent > then it's ok to use GET... and surely: > > GET /emailthis?address=karen@... > > would be safe and idempotent would it not? From the server's standpoint, probably... though on a practical level I don't think having the beastie email you a package of stuff every time your cache software pre-fetches all the links qualifies as "safe," so I imagine I'd go with a dummy POST just on that basis. On an only-peripherally-related note: anybody got magic CSS or something that can make form links like that look like regular links? It's a bit inconsistent, having "underscore-link, underscore-link, big-boxy-button" when it's a form that has no other (non-hidden) fields, plus the whole form-as-block-type thing makes the layout messy. I suppose I could make all the GETs form-based too (at least, I assume browsers can do that; I can't say I've tried) but that seems wrong too.
Karen wrote: > On an only-peripherally-related note: anybody got magic CSS or > something that can make form links like that look like regular links? I tend to use: <a href="confirmation_page_with_submit_button" onclick="return function_returns_false_if_successfully_does_a_POST_and_true_otherwise()">Clicky!</a> Naming convention not shown drawn to scale!
Henry, hello. On 26 Jul 2007, at 15:32, Henry Story wrote: > (2) It does not feel RESTful. If something is to return information > it should have a URL. Here there is very clearly overlapping of > concerns as explained above. What is the url for information for one > identity here? I have a large alarm bell ringing when I read sections > such as: "Fetch message" and "store message". Is that not the > equivalent of HTTP GET and PUT? > -b- query language: SPARQL though not yet finished does > everything that is needed here as shown in the example given at [1] Disclaimer: I've looked through the attribute exchange draft, but haven't studied it; the following is not, I think, dependent on the fine details. My feeling is that SPARQL is not a good fit here, because this might be a case where a RESTful approach is not ideal. Attribute exchange is pretty fundamentally a conversation: if I want to get a set of Ignatius's attributes from you, I have to assure myself of who you are, but you also have to either (a) assure yourself of who I am, and that I am permitted to get all of Ignatius's attributes, or (b) I have some other legitimate need to get this information, which fits in with a pre-existing policy: me: tell me Ignatius's age you: why do you want to know? me: I'll only sell him drink if he's over 18 you: OK: he's over 18, but I'm not telling you his birthday me: that'll do When fully general, this turns into a decidedly non-trivial problem. The simpler case which seems to be suggested in OpenID docs and the attribute exchange draft, of allowing the (human) OpenID owner to choose on the fly what attributes are released to a relying party, still involves a three-way interaction, between the relying party, the IdP and the human. Although you can characterise that as a GET which might take a long while to be retrieved (when will the user come back from coffee?), it feels a bit forced to me. 
It's doable, but it's not clear how `this is who I am and why I want your birthday', or even just `this is who I am', would be included in the SPARQL request. The case which you describe in your (excellent) [1] is, I think, the most basic case, where all the attributes are available without any policy at all, and the problem is simply how does one associate Ignatius's FOAF file with his OpenID. Now, OpenID is about keeping things simple, and it might be deemed valuable to keep things precisely this simple; in that case, a pointer to a FOAF file would indeed be hugely simpler than a new protocol. > [1] http://blogs.sun.com/bblfish/entry/foaf_openid Separately: > PS. Now criticism (1) above is a little tricky perhaps because if > the Identity Provider has no say > over the OpenId resource, and that is used to point to personal > information, then the open id could be describing itself in ways that > would be completely unacceptable to the organisation controlling the > Identity Provider. So there may be a good reason to have it have some > control over the OpenId, or for one to want to ask it questions > regarding the identity of the person associated with that id. > > In my view what will happen is that in the end all identity providers > will have control of their openid they give out. So as Sun gives out > http://openid.sun.com/bblfish so visa will give out > http://visa.com/iod/1231341 and there will then be my personal openid > foaf file, which will link them all together (if I want to). Can you elaborate on this? Do you just mean that sun.com might not want openid.sun.com/bblfish to be able to say 'My politics are X'? My first reaction to such a prohibition would be that it misunderstands what (I think) OpenID is clever about. OpenID avoids some of X.509's problems by _not_ making any link between online and offline entities. 
So, pace Tim Bray in [a], all an OpenID says is that its owner is the same person over time; the IdP doesn't warrant that the name means anything. The IdP doesn't even warrant that the attributes it's supplying are true, simply that these are honestly what the offline human told it to say. Or am I missing your point? Best wishes, Norman [a] http://www.tbray.org/ongoing/When/200x/2007/02/24/OpenID#p-3 -- ------------------------------------------------------------ Norman Gray : http://nxg.me.uk eurovotech.org : University of Leicester, UK
On 7/26/07, Jon Hanna <jon@...> wrote: > Karen wrote: > > On an only-peripherally-related note: anybody got magic CSS or > > something that can make form links like that look like regular links? Sorry, I should have clarified: "...that works for non-Javascript browsers." There's all sorts of magic (like not overloading POST!) I'll get to do when I do the rich version, but I'm not there yet.
Karen wrote: > Sorry, I should have clarified: "...that works for non-Javascript > browsers." There's all sorts of magic (like not overloading POST!) > I'll get to do when I do the rich version, but I'm not there yet. For the most part I'm happy that if there's no javascript then the browser would go to the page called confirmation_page_with_submit_button in my example, esp. since the javascript in question often has some sort of confirmation check itself. In cases where I've really wanted to make the difference between a link and submit invisible I've made use of the fact that an image input and an image link have very little difference whether shown as images (some browsers use different cursors, but that's easily changed with CSS) or as alt text.
On 7/26/07, Jon Hanna <jon@...> wrote: > In cases where I've really wanted to make the difference between a link > and submit invisible I've made use of the fact that an image input and > an image link have very little difference whether shown as images (some > browsers use different cursors, but that's easily changed with CSS) or > as alt text. Oh, no invisible submits either. And images are mostly a no-go, though with appropriate labeling they might work. (Yes, I'm just full of restrictions, aren't I?) How about: "It has to work in Lynx," since that's my test platform for the vanilla version.
On 7/26/07, Karen <karen.cravens@...> wrote: > How about: "It has to work in Lynx," since that's my test platform for > the vanilla version. Of course, the CSS isn't going to affect Lynx, but on the other hand the difference between the submit button and the clicky link isn't so extreme there either, so I'm good with that.
Karen: another possibility is to CSS the links to look like buttons. mamund On 7/26/07, Karen <karen.cravens@...> wrote: > On 7/26/07, Jon Hanna <jon@...> wrote: > > Karen wrote: > > > On an only-peripherally-related note: anybody got magic CSS or > > > something that can make form links like that look like regular links? > > Sorry, I should have clarified: "...that works for non-Javascript > browsers." There's all sorts of magic (like not overloading POST!) > I'll get to do when I do the rich version, but I'm not there yet. > > > > Yahoo! Groups Links > > > > -- mca "In a time of universal deceit, telling the truth becomes a revolutionary act. " (George Orwell)
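Mike's suggestion works in the other direction too: rather than dressing links up as buttons, the lone submit button can be stripped down to look like an inline link with CSS alone, no Javascript. A sketch (the class name and form action are hypothetical; Lynx ignores the CSS entirely, where buttons and links already look much the same):

```html
<form method="POST" action="/emailthis" style="display: inline">
  <input type="hidden" name="post" value="7342"/>
  <button type="submit" class="linkish">Email this</button>
</form>
<style>
  /* Strip the button chrome so it renders like an ordinary link. */
  button.linkish {
    border: none; background: none; padding: 0;
    color: blue; text-decoration: underline; cursor: pointer;
    font: inherit;
  }
</style>
```

The inline style on the form also sidesteps the form-as-block layout mess Karen mentioned, since the form no longer forces a line break.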
Thanks for the feedback Norman. I am forwarding this back to the
openid group, because your responses have really helped me understand
the issues better.
By the way there is a really useful pdf book on OpenId that is
available for review currently, before it goes to the printer:
http://www.openidbook.com
On 26 Jul 2007, at 18:26, Norman Gray wrote:
> Henry, hello.
>
> On 26 Jul 2007, at 15:32, Henry Story wrote:
>
>> (2) It does not feel RESTful. If something is to return information
>> it should have a URL. Here there is very clearly overlapping of
>> concerns as explained above. What is the url for information for one
>> identity here? I have a large alarm bell ringing when I read sections
>> such as: "Fetch message" and "store message". Is that not the
>> equivalent of HTTP GET and PUT?
>
>> -b- query language: SPARQL though not yet finished does
>> everything that is needed here as shown in the example given at [1]
>
> Disclaimer: I've looked through the attribute exchange draft, but
> haven't studied it; the following is not, I think, dependent on the
> fine details.
Neither have I. Chapter 6 of the book I mentioned above goes over it
in easier language.

> My feeling is that SPARQL is not a good fit here, because this
> might be a case where a RESTful approach is not ideal.
>
> Attribute exchange is pretty fundamentally a conversation: if I
> want to get a set of Ignatius's attributes from you, I have to
> assure myself of who you are, but you also have to either (a)
> assure yourself of who I am, and that I am permitted to get all of
> Ignatius's attributes, or (b) I have some other legitimate need to
> get this information, which fits in with a pre-existing policy:
>
> me: tell me Ignatius's age
> you: why do you want to know?
> me: I'll only sell him drink if he's over 18
> you: OK: he's over 18, but I'm not telling you his birthday
> me: that'll do
Hehe. That would indeed be very cool. One could, I am sure, do that in
SPARQL too, but one would need a relation such as
query:IWillAnswerOnlyIfYouAnswerThisFirst.
> When fully general, this turns into a decidedly non-trivial problem.
Indeed... Interesting idea. But I don't think that this is what they
are proposing... as I just realized you admit below.
> The simpler case which seems to be suggested in OpenID docs and the
> attribute exchange draft, of allowing the (human) OpenID owner to
> choose on the fly what attributes are released to a relying party,
> still involves a three-way interaction, between the relying party,
> the IdP and the human.
Ah thanks for pointing that out. I just found that stated in the book
"In general, any time a Consumer requests these additional parameters
for user registration purpose, the identity provider should prompt
the End User before sending these parameters to the Consumer. The End
User should be given a choice which parameters it wants to send to
the Identity Provider"
I had not picked up on this.
> Although you can characterise that as a GET which might take a long
> while to be retrieved (when will the user come back from coffee?),
> it feels a bit forced to me. It's doable, but it's not clear how
> `this is who I am and why I want your birthday', or even just `this
> is who I am', would be included in the SPARQL request.
>
Well, I think one could look at this as a form of indirect SELECT
SPARQL query, where the variables in the SELECT get to be chosen by
the end user. I.e., the Consumer sends
WHERE {
?p foaf:openid <http://openid.sun.com/bblfish> .
OPTIONAL { ?p foaf:birthday ?bday } .
OPTIONAL { ?p foaf:mbox ?mbox } .
}
and the End User gets to prepend the SELECT clause
SELECT ?bday
if he only wants the birthday to be passed on but not the mbox.
It is not done as a SPARQL request because that would make for URL
redirects that are much too long. So really the protocol is working
with a predefined template SPARQL query, and the
field names are the names of the variables passed around without the
'?'.
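Written out as the full SPARQL it is equivalent to, the exchange would amount to something like the following (a sketch only; as noted above, the protocol itself passes just the variable names, not an actual SPARQL request):

```sparql
# The Consumer supplies the WHERE clause as a template; the End User,
# by ticking boxes at the identity provider, effectively chooses which
# variables appear in the SELECT. Here only the birthday is released.
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?bday
WHERE {
  ?p foaf:openid <http://openid.sun.com/bblfish> .
  OPTIONAL { ?p foaf:birthday ?bday } .
  OPTIONAL { ?p foaf:mbox ?mbox } .
}
```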
Mhh looked at this way, it seems easier to understand to me, and it
makes more sense.
Mind you, in that case it should be easy to define how to change the
information on the server: do a PUT of some foaf there. That would
remove the storage protocol, which I think is clearly unRESTful:
they are reinventing HTTP response codes, with things such as
storage success, storage failure, etc...
Another thought: if one thinks about it, this redirect to the server
has to happen because the end user does not have an RDF database to
query. If he did, then the client could query the user directly.
> The case which you describe in your (excellent) [1] is, I think,
> the most basic case, where all the attributes are available without
> any policy at all, and the problem is simply how does one associate
> Ignatius's FOAF file with his OpenID. Now, OpenID is about keeping
> things simple, and it might be deemed valuable to keep things
> precisely this simple; in that case, a pointer to a FOAF file would
> indeed be hugely simpler than a new protocol.
>
>> [1] http://blogs.sun.com/bblfish/entry/foaf_openid
>
:-) ok happy to have contributed something.
I had tried to deal with the problem by having the foaf file return
more or less information. I would have required that the end user
specify what should be visible for that Client when entering his
password.
There was some hand-waving there, because it was not so clear how one
can identify the server. One would have to:
- give the client an OpenId too, which I was thinking could be
linked to the server_root
- identify the foaf file somehow as being identity dependent,
perhaps by adding a new HTTP header pointing to the login point, so
that the client would know to login for more information
- and of course the foaf file would have to be served by the same
service as the openid authentication.
But that may in the end be more complicated than the query part of
the protocol defined above, and does not feel as clean as the
indirect query idea.
It looks like the idea of querying the end user directly would be the
best, once he has been correctly identified. In that case though the
end user is not that different from a very slow web server.
It would be interesting to think more about a SPARQL based service,
given the flexibility such a language and format make available.
> Separately:
>
>> PS. Now criticism (1) above is a little tricky perhaps because if
>> the Identity Provider has no say
>> over the OpenId resource, and that is used to point to personal
>> information, then the open id could be describing itself in ways that
>> would be completely unacceptable to the organisation controlling the
>> Identity Provider. So there may be a good reason to have it have some
>> control over the OpenId, or for one to want to ask it questions
>> regarding the identity of the person associated with that id.
>>
>> In my view what will happen is that in the end all identity providers
>> will have control of their openid they give out. So as Sun gives out
>> http://openid.sun.com/bblfish so visa will give out
>> http://visa.com/iod/1231341 and there will then be my personal openid
>> foaf file, which will link them all together (if I want to).
>
> Can you elaborate on this? Do you just mean that sun.com might not
> want openid.sun.com/bblfish to be able to say 'My politics are X'?
No but I suppose they might not want it to say "My name is Jonathan
Schwartz" :-)
> My first reaction to such a prohibition would be that it
> misunderstands what (I think) OpenID is clever about. OpenID
> avoids some of X.509's problems by _not_ making any link between
> online and offline entities. So, pace Tim Bray in [a], all an
> OpenID says is that its owner is the same person over time; the IdP
> doesn't warrant that the name means anything. The IdP doesn't even
> warrant that the attributes it's supplying are true, simply that
> these are honestly what the offline human told it to say.
That is a good response. And so having the OpenId be a FoafDocument,
or point to a FoafDocument that is outside of the sphere of influence
of the Identity Provider, is OK, as OpenId is currently specified.
But!
Sun stated that all OpenIds starting with
http://blogs.sun.com/bblfish identify Sun employees. They have had to
do that verbally, since OpenId by itself does not make any tools
available to make this possible.
Now in my last post I showed how Sun could make this statement with a
few simple additional foaf relations [2], and by creating itself a
corporate foaf file. Essentially you can think of the Identity
Provider service as a group identifier that can also identify the
members of the group. This would then give a mechanical way for Sun
to make a statement about that service. The service could also make
the same point by pointing back, in an rdf/xml representation, to
the Sun Microsystems foaf file.
In that situation it is clear that Sun will not want the attributes
on its service endpoint to be all equally updatable, because it is
now making claims about the members of the Sun group, with legal
responsibilities. Neither would Sun want the foaf file linked to from
openid.sun.com/bblfish to be completely in the hands of the
employee. Some attributes may be, but others may not.
So I think I have shown in [2] that OpenId could be used for a lot
more than what it is used for currently, with a little bit of extra
metadata, which is neat.
> Or am I missing your point?
>
> Best wishes,
>
> Norman
>
> [a] http://www.tbray.org/ongoing/When/200x/2007/02/24/OpenID#p-3
>
[2] http://blogs.sun.com/bblfish/entry/a_foaf_file_for_sun
> --
> ------------------------------------------------------------
> Norman Gray : http://nxg.me.uk
> eurovotech.org : University of Leicester, UK
>
>
Karen wrote: > Oh, no invisible submits either. I didn't say the submit was invisible, I said the *difference* between a submit and a link was invisible - exactly what you are looking for. > And images are mostly a no-go, though > with appropriate labeling they might work. (Yes, I'm just full of > restrictions, aren't I?) > > How about: "It has to work in Lynx," since that's my test platform for > the vanilla version. Image elements and inputs of type "image" work with Lynx. One standards-compliant rendering of images is to display the contents of the alt attribute in the appropriate place, and Lynx does that perfectly well. The bigger problem with images is that of people using graphical browsers who have difficulty seeing the particular image in question.
Hi Karen, * Karen <karen.cravens@...> [2007-07-26 16:30]: > What do you do with your garden-variety "Email this" link? REST is all about the client transitioning from application state to application state by interpreting representations of resources that it requests from the server. There’s nothing about such an architecture that would have any trouble modelling “Email this” links. > It's not changing stored server state, it's just... well, it's > just emailing this (where "this" is a stored mailing-list post, > on the assumption that the requesting, logged-in subscriber > didn't get it or accidentally deleted it, and wants to have > another copy). Right. It is a request performed for its side effects. You don’t want the client to be able to disclaim responsibility for those side effects, so it’s not a safe request. It’s not idempotent either, at least in the usual implementations: if you repeat the request, the server will repeatedly send mail. Therefore, POST is precisely the right method for this. > I'm guessing it's an imaginary resource with only a POST > function (you can't GET anything that will meaningfully tell > you it's already been sent or not), but that seems wrong > somehow. Why? A RESTful server is not a database or file system. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
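The safe-but-not-idempotent distinction in that post can be sketched in a few lines. This is a hypothetical illustration (the function and endpoint names are invented, and the list stands in for an SMTP gateway), not anyone's actual implementation:

```python
# "Email this" as a resource: GET is safe (it only returns a
# representation), POST performs the side effect. Repeating the POST
# sends mail again, so the request is neither safe nor idempotent.

sent_log = []  # stand-in for an SMTP gateway


def email_this(method, post_id, address=None):
    if method == "GET":
        # Safe: describe the action without performing it.
        return 200, f"POST an address here to re-send post {post_id}"
    if method == "POST":
        sent_log.append((post_id, address))  # side effect, every time
        return 202, f"Queued post {post_id} for {address}"
    return 405, "Method Not Allowed"


email_this("POST", 7342, "karen@example.org")
email_this("POST", 7342, "karen@example.org")
# Two POSTs, two mails queued: exactly why GET would be the wrong verb
# once prefetching caches enter the picture.
```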
On 7/21/07, Elliotte Harold <elharo@...> wrote: > That XML is so complex that you really need a true parser to handle it > is a feature, not a bug. It discourages and mostly prevents the use of > porr quality, hand-written solutions to handle it. Even in the rare > cases where the solutions are hand-written, they're typically based on > non-Turing complete regex's. No one takes an arbitrary XML document and > throws it into a JavaScript interpreter. People do this with JSON all > the time, and the language was deliberately designed to make this possible. > That's an interesting premise. I think it has some validity, but there is a major consequence: there is effectively one XML parser for Java, Xerces, whose code is something I'm scared of. There are also, I think, two for Windows, MSXML and whatever .NET uses; I haven't seen their code to fear it. But the result is that there are three parsers to target. Find a buffer overflow in MSXML and you have another back door into Windows; find some way to do bad things in Xerces, and you own most of the Java- and XML-based services on the planet. I've always felt it would be a better place to put in a back door (a custom PI? an obscure bit of XSD?) than a SOAP stack, because even inside something like Axis1, the base servlet pipeline is fairly simple (and, as the author of much of that servlet, something I trust :). Nowadays, if I were to back-door a web service, I'd target the WSDL-to-Java code generation: that's some nasty template stuff that generates Java source nobody reads. XmlBeans is probably just as juicy a target, and as that tool chucks away the source after creating the .class files, fairly subtle; you just need to identify some extra-rare XSD pattern to go after. One strength of JSON is that its simplicity is like XMLRPC's: the effort of creating a parser is so low that we have a very heterogeneous codebase out there, at least until javax.ws.json ships.
-Steve (original author of the Axis security pages, http://ws.apache.org/axis/java/security.html)
Steve Loughran wrote: > One strength of JSON is that its simplicity is like XMLRPC -the effort You mean the spec is inconsistent and buggy, but because it fits on one page it seems simple on first pass? I hear a lot of the bugs in XMLRPC have been fixed, but really its "simplicity" is chimeric.
On 7/30/07, Jon Hanna <jon@...> wrote: > Steve Loughran wrote: > > One strength of JSON is that its simplicity is like XMLRPC -the effort > > You mean the spec is inconsistent and buggy but because it fits on one > page it seems like it's simple on first pass? > > I hear a lot of the bugs in XMLRPC have been fixed, but really it's > "simplicity" is chimeric. > Yep. But it also means you don't need to commit to a SOAP stack vendor, have a toolchain whose whole aim in life is to hide the incoming data, or rely on reverse-generated WSDL and XSD to describe the operations. This doesn't mean I'm a fan of XMLRPC (I'm not), only that I don't consider much of SOAP, other than SOAPFault, to be an improvement. SOAPFault I do like, but only as long as you stay in the XML space, and stop trying to turn it back into a predefined native fault type, which will drop on the floor all the interesting stuff the stack added, like (in Axis) the hostname, a stack trace and any HTTP error codes picked up on the way. -steve
* Elliotte Harold <elharo@...> [2007-07-21 14:52]: > A. Pagaltzis wrote: > > * Elliotte Harold <elharo@...> [2007-07-17 01:00]: > >> You're not supposed to put stuff arbitrary JavaScript into > >> JSON, but people can and do. > > > > Then it’s not JSON anymore and JSON parsers will choke on it. > > JSON is a computation-free subset of Javascript. > > Crackers don't play by the rules. They do not send only > well-formed messages that adhere to the spec. Secure software > has to be ready for absolutely any input, not just input that > follows the spec. Sure. All of the software I’ve written to date will spit stuff back out if it purports to be JSON but contains Javascript code. Because *none* of my code that uses JSON is Javascript. Now how does that fit into your world view? (And if it were JS, my statement would still hold true because I wouldn’t use `eval` anyway.) Let the crackers have at it. They’re not disturbing my sleep. > That XML is so complex that you really need a true parser to > handle it is a feature, not a bug. It discourages and mostly > prevents the use of porr quality, hand-written solutions to > handle it. Right. That’s why we had the billion laughs attack, and why XML parsers can be caused to participate in a DDoS or to violate the privacy of their users if you provide an address for an external DTD. Give me a JSON parser any day. I know, the former is fixed in most parsers. The latter generally has to be manually disabled from client code; most app developers forget to toggle it appropriately. > No one takes an arbitrary XML document and throws it into a > JavaScript interpreter. People do this with JSON all the time, That’s a bug. Such code will fail to reject things that are not JSON. > the language was deliberately designed to make this possible. Did Crockford actually say that somewhere? Citation? And if that were so, why would JSON forbid syntactical variants (such as keys with no quotes around them) that are valid in Javascript? 
Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
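The point that non-Javascript consumers simply reject "JSON" contaminated with Javascript code can be checked against any strict parser. A minimal sketch using Python's standard json module (the helper name is invented for illustration):

```python
import json


def is_strict_json(text):
    """Return True only if text is actual JSON, not general Javascript."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False


print(is_strict_json('{"age": 18}'))        # plain data: accepted
print(is_strict_json('{"age": alert(1)}'))  # Javascript call: rejected
print(is_strict_json('{age: 18}'))          # unquoted key (valid JS): rejected
```

The last case is the syntactic-variant question from the post: a key without quotes is legal Javascript but not legal JSON, and a conforming parser refuses it.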
Hi, I'm fairly new to all things REST (and Web Services in general), but I do have some questions for you. Namely: - What do you think of the idea of having an interface definition language for REST services? It seems some people outright reject the idea, while others support it in the form of WADL. It seems to me something like that would be nice to have, allowing for example Yahoo, Google and Microsoft to agree on a unified API for searches, that would be published using the IDL of choice. Clients could then switch fairly easily from one implementation to another. Which leads to my second question... - Is there any ongoing attempt at establishing a standard for service registries? If we were to have that kind of domain-specific standardized services, it would probably be interesting to have registries of interesting service interfaces along with running implementations. Which could be automatically tested for liveness by the registry, possibly using information from the IDL (testing published resources actually are there and respond to advertised verbs, etc). Sorry if this sounds very naive, as I said, I'm a newcomer. I did read the previous thread about WADL but it seemed to drift to a discussion of the merits of WADL itself and not IDLs for REST in general. -- Olivier Pernet We are the knights who say echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq'|dc
"Olivier Pernet" <o.pernet@...> writes:
> - What do you think of the idea of having an interface definition
> language for REST services ? It seems some people outright reject the
It'd be nice to have the ideas of HTML forms influence software-to-software
HTTP-based-API design: the idea that, instead of hard-coding the
query parameters that a resource expects to see GET or POSTed, a previous GET
might have carried an in-band form that describes such parameters for the
software to construct; the idea that services, in the same way that browsers
do, benefit from starting from http://api.flickr.yahoo.com/ and traversing
expected representations, rather than hard-coding URLs...
> example Yahoo, Google and Microsoft to agree on a unified API for
> searches, that would be published using the IDL of choice. Client
Why would they want to do that?
At the same time, one could imagine many of the blog engines of the world
publishing a little fragment that looked something like:
<!-- ... -->
<form API:id="archive-content-search" method="GET" action="./search">
<input type="text" name="q"/><input type="submit" value="Search!"/>
</form>
<!-- ... -->
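The form-as-in-band-description idea above can be sketched from the client side: the software fetches the page, reads the form, and builds its request from what the form advertises instead of from hard-coded knowledge. A hypothetical illustration in Python (class and variable names invented; the fragment is a simplified version of the form above):

```python
# A client that discovers a service's parameters from an in-band form
# rather than hard-coding them.
from html.parser import HTMLParser
from urllib.parse import urlencode


class FormReader(HTMLParser):
    """Collect the method, action, and text-input names of the first form."""

    def __init__(self):
        super().__init__()
        self.method = self.action = None
        self.fields = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form" and self.method is None:
            self.method = a.get("method", "GET").upper()
            self.action = a.get("action")
        elif tag == "input" and a.get("type") == "text":
            self.fields.append(a["name"])


fragment = '''<form method="GET" action="./search">
  <input type="text" name="q"/><input type="submit" value="Search!"/>
</form>'''

reader = FormReader()
reader.feed(fragment)
# Fill in the advertised field and construct the request URI:
uri = reader.action + "?" + urlencode({reader.fields[0]: "rest"})
print(reader.method, uri)  # GET ./search?q=rest
```

If the blog engine later renames the field or moves the action, a client built this way follows along, which is exactly the hypermedia property browsers already enjoy.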
> - Is there any ongoing attempt at establishing a standard for service
> registries ? If we were to have that kind of domain-specific
> standardized services, it would probably be interesting to have
[...]
ETOO_MUCH_HYPOTHESISING.
> We are the knights who say
> echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq'|dc
Heh heh. :)
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
On Aug 3, 2007, at 3:41 AM, Olivier Pernet wrote: > Hi, > > I'm fairly new to all things REST (and Web Services in general), but I > do have some questions for you. Namely: > > - What do you think of the idea of having an interface definition > language for REST services ? It seems some people outright reject the > idea, while others support it in the form of WADL. > It seems to me something like that would be nice to have, allowing for > example Yahoo, Google and Microsoft to agree on a unified API for > searches, that would be published using the IDL of choice. Client > could then switch fairly easily from an implementation to another. > Which leads to my second question... > > http://bitworking.org/news/193/Do-we-need-WADL > - Is there any ongoing attempt at establishing a standard for service > registries ? If we were to have that kind of domain-specific > standardized services, it would probably be interesting to have > registries of interesting service interfaces along with running > implementations. Which could be automatically tested for liveness by > the registry, possibly using information from the IDL (testing > published resources actually are there and respond to advertised > verbs, etc). > > We have one failed approach (with UDDI) already; not sure why we'd need another one. For enterprisey (governance) aspects, some thoughts here: http://www.innoq.com/blog/st/2007/07/26/governance_and_rest.html > Sorry if this sound very naive, as I said, I'm a newcomer. I did read > the previous thread about WADL but it seemed to drift to a discussion > of the merits of WADL itself and not IDLs for REST in general. > -- > Olivier Pernet > > We are the knights who say > echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq'|dc > :-)
On 3 Aug 2007, at 04:42, Josh Sled wrote: > "Olivier Pernet" <o.pernet@gmail.com> writes: >> - What do you think of the idea of having an interface definition >> language for REST services ? It seems some people outright reject the > > It'd be nice to have the ideas of HTML forms influence software-to- > software > HTTP-based-API design. The idea that if instead of hard-coding the > query-parameters that a resource expects to see GET or POSTed, a > previous GET > might have an in-band form that describes such parameters, for the > software > to construct. The idea that services in the same ways that > browsers do > benefit from starting from http://api.flickr.yahoo.com/ and traversing > expected representations, rather than hard-coding URLs... Forms indeed are a key piece that needs to be looked at. I have found it inspiring to think of forms as questions available for the user agent to answer. By ticking a checkbox, entering text in a field, selecting a drop-down menu, the user is answering a question explained in English to him, and binding variables to his answer. To make this more automatable, the questions have to be able to be asked in a machine-readable way that is both easy for humans and machines to understand, and that is flexible and decentralised. The semantic web provides the machine-readable decentralised vocabulary to describe the world, SPARQL the query language to ask it. To see how I first thought of it, have a quick read of "RESTful Semantic Web Services" http://blogs.sun.com/bblfish/entry/restful_semantic_web_services That is a first shot. But it seems to be much more RESTful and easier to understand than the SOAP stack. > >> example Yahoo, Google and Microsoft to agree on a unified API for >> searches, that would be published using the IDL of choice. Client > That is called SPARQL. > Why would they want to do that?
> > > At the same time, one could imagine many of the blog engines of the > world > publishing a little fragment that looked something like: > > <!-- ... --> > <form API:id="archive-content-search" method="GET" action="./ > search"> > <input type="text" name="q"/><input type="submit" > value="Search!"/> > </form> > <!-- ... --> > > >> - Is there any ongoing attempt at establishing a standard for service >> registries ? If we were to have that kind of domain-specific >> standardized services, it would probably be interesting to have > [...] You no longer need standard registries. With SPARQL and the semantic web you have all you need. Now I am not absolutely sure of that statement to tell you the truth, because I have not used registries that much, so if you have a doubt please put forward a use case of a registry, and I'll try to show how I would do it . > > ETOO_MUCH_HYPOTHESISING. > > >> We are the knights who say >> echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq'|dc > > Heh heh. :) :-)
On 8/3/07, Josh Sled <jsled@asynchronous.org> wrote: > "Olivier Pernet" <o.pernet@gmail.com> writes: > > - What do you think of the idea of having an interface definition > > language for REST services ? It seems some people outright reject the > > It'd be nice to have the ideas of HTML forms influence software-to-software > HTTP-based-API design. The idea that if instead of hard-coding the > query-parameters that a resource expects to see GET or POSTed, a previous GET > might have an in-band form that describes such parameters, for the software > to construct. The idea that services – in the same ways that browsers do – > benefit from starting from http://api.flickr.yahoo.com/ and traversing > expected representations, rather than hard-coding URLs... Now that sounds good. It's still an IDL, isn't it? It's just distributed directly by the service provider and is not exportable. But this and WADL are different implementations of the same idea, aren't they? > > example Yahoo, Google and Microsoft to agree on a unified API for > > searches, that would be published using the IDL of choice. Client > > Why would they want to do that? OK, that sounds a bit far-fetched. How about makers of blogging software agreeing on such a standard so that it's easier to - switch blogging engines - write software that extracts stuff out of blogs > At the same time, one could imagine many of the blog engines of the world > publishing a little fragment that looked something like: > > <!-- ... --> > <form API:id="archive-content-search" method="GET" action="./search"> > <input type="text" name="q"/><input type="submit" value="Search!"/> > </form> > <!-- ... --> [...] Thanks for the prompt replies! -- Olivier Pernet We are the knights who say echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq'|dc
On 8/3/07, Stefan Tilkov <stefan.tilkov@...> wrote: > On Aug 3, 2007, at 3:41 AM, Olivier Pernet wrote: > > > Hi, > > > > I'm fairly new to all things REST (and Web Services in general), but I > > do have some questions for you. Namely: > > > > - What do you think of the idea of having an interface definition > > language for REST services ? It seems some people outright reject the > > idea, while others support it in the form of WADL. > > It seems to me something like that would be nice to have, allowing for > > example Yahoo, Google and Microsoft to agree on a unified API for > > searches, that would be published using the IDL of choice. Client > > could then switch fairly easily from an implementation to another. > > Which leads to my second question... > > > > > http://bitworking.org/news/193/Do-we-need-WADL Well, this makes a case against WADL, based on - experience with WSDL showing that people will want to generate code from it - the fact that current schema description languages are not adequate ...all implementation issues, however important, IMHO. On the other hand, the OpenSearch format definitely is an IDL. It's a domain-specific IDL. So what? It's a different implementation of the same basic idea. Now it may make a lot more sense to have narrower, domain-specific IDLs like that: it may make capturing semantics easier, and users of a search service are not going to switch to a stockquote service without changing their code anyway. Now comes the question: how do domain-specific IDLs fit with the idea of having forms as IDLs? As I see it, forms as IDLs are just a layer of indirection, while a full-fledged IDL document may give more information. But maybe more information isn't needed. > > - Is there any ongoing attempt at establishing a standard for service > > registries ? 
If we were to have that kind of domain-specific > > standardized services, it would probably be interesting to have > > registries of interesting service interfaces along with running > > implementations. Which could be automatically tested for liveness by > > the registry, possibly using information from the IDL (testing > > published resources actually are there and respond to advertised > > verbs, etc). > > > > > We have one failed approach (with UDDI) already; not sure why we'd > need another one. > For enterprisey (governance) aspects, some thoughts here: > http://www.innoq.com/blog/st/2007/07/26/governance_and_rest.html Interesting. But I was more thinking about the public Web here, where having a starting point to get to service implementations could be more valuable than within a single enterprise (where you more or less know what is up and running). -- Olivier Pernet We are the knights who say echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq'|dc
"Olivier Pernet" <o.pernet@...> writes:
> On 8/3/07, Josh Sled <jsled@...> wrote:
>> "Olivier Pernet" <o.pernet@...> writes:
>> > - What do you think of the idea of having an interface definition
>> > language for REST services ? It seems some people outright reject the
>>
>> It'd be nice to have the ideas of HTML forms influence software-to-software
>> HTTP-based-API design. The idea that if instead of hard-coding the
>> query-parameters that a resource expects to see GET or POSTed, a previous GET
>> might have an in-band form that describes such parameters, for the software
>> to construct. The idea that services – in the same ways that browsers do –
>> benefit from starting from http://api.flickr.yahoo.com/ and traversing
>> expected representations, rather than hard-coding URLs...
>
> Now that sounds good. It's still an IDL, isn't it? It's just
> distributed directly by the service provider and is not exportable.
> But this and WADL are different implementations of the same idea,
> aren't they?
Somewhat. AIUI, WADL is still a development-/compile-time description. Once
you've written software to the described API, you're tightly coupled to that
API. The other approach is a run-time, dynamic one. In particular, it moves
the application state transitions into hypermedia, which is one of the key
REST constraints. I still think there's room for an "IDL", but it's
"distributed" through the resources and their representation formats, rather
than a single up-front service-focused description document.
--
...jsled
http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
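A minimal sketch of the run-time, form-driven client Josh describes, in Python: the client GETs a representation, finds the in-band form, and constructs the request from what the server advertised rather than from hard-coded parameters. The form fragment mirrors the "archive-content-search" example above; the namespace URI and all names are illustrative assumptions.

```python
# Sketch: discover query parameters at run time from an in-band form
# instead of hard-coding them. All names here are illustrative.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode, urljoin

FORM = """<form xmlns:API="http://example.org/api" API:id="archive-content-search"
      method="GET" action="./search">
  <input type="text" name="q"/>
  <input type="submit" value="Search!"/>
</form>"""

def build_request(form_xml, base_uri, values):
    """Construct (method, uri) from a form the server sent us."""
    form = ET.fromstring(form_xml)
    method = form.get("method", "GET").upper()
    action = urljoin(base_uri, form.get("action"))
    params = {}
    for field in form.findall("input"):
        if field.get("type") == "submit":
            continue  # submit buttons carry no query data here
        name = field.get("name")
        if name is not None:
            params[name] = values.get(name, "")
    if method == "GET" and params:
        return method, action + "?" + urlencode(params)
    return method, action

method, uri = build_request(FORM, "http://blog.example.com/archive/", {"q": "rest"})
```

If the blog engine renames its query parameter or moves the search URI, this client keeps working after the next GET, which is the point of the approach.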
Hi Henry, On Aug 3, 2007, at 1:24 AM, Story Henry wrote: > To see how I first thought of it, have a quick read at "RESTFul > Semantic Web > Services" > > http://blogs.sun.com/bblfish/entry/restful_semantic_web_services > > That is a first shot. But it seems to be much more RESTful and easier > to understand than the SOAP stack. Thanks! I've added it to my REST Service Descriptions page: http://microformats.org/wiki/rest/description#Proposals.2FExamples Best, -- Ernie P.
Let's say we PUT some document to /articles/1. The document is XML, and has fields for a title, body, date, etc. The title is blank, but our app doesn't allow that, so the change isn't made. What status code should I use for that? My two best guesses are 403 and 409. 403: The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity. If the server does not wish to make this information available to the client, the status code 404 (Not Found) can be used instead. That seems decent...no amount of repeating the same request will do any good, nor will some other authorization. There's some problem with the request itself. I can display the errors that prevented the request from completing successfully. 409: The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The response body SHOULD include enough information for the user to recognize the source of the conflict. Ideally, the response entity would include enough information for the user or user agent to fix the problem; however, that might not be possible and is not required. Conflicts are most likely to occur in response to a PUT request. For example, if versioning were being used and the entity being PUT included changes to a resource which conflict with those made by an earlier (third-party) request, the server might use the 409 response to indicate that it can't complete the request. In this case, the response entity would likely contain a list of the differences between the two versions in a format defined by the response Content-Type. 
This is another possible code - there's a conflict between what the user submitted and what is an acceptable state. Again I can and should return some error information. I'd really appreciate some insight into which code is the best to use. Of course if there's one more suitable than 403 and 409 I'd like to know about it. Thanks, Pat
Pat Maddox wrote: > ... > I'd really appreciate some insight into which code is the best to use. > Of course if there's one more suitable than 403 and 409 I'd like to > know about it. > ... Both 403 and 409 will work; you could also consider 422... In the end, I don't think it will matter unless you expect generic clients to do something with your server... Best regards, Julian
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
>>>>> "Pat" == Pat Maddox <pergesu@...> writes:
Pat> I'd really appreciate some insight into which code is the
Pat> best to use. Of course if there's one more suitable than 403
Pat> and 409 I'd like to know about it.
The issue is: what to do when the client doesn't obey the
precondition.
For what it is worth, I'm returning 403 in such a case.
- --
All the best,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.8 <http://mailcrypt.sourceforge.net/>
iD8DBQFGs4kPIyuuaiRyjTYRAs6JAJ435PP9qyyqwSILjm4abPY/fNYTLQCfQZwH
p4I5+eCIN57saxM18hmYcY4=
=uJ5k
-----END PGP SIGNATURE-----
On Aug 3, 2007, at 2:38 PM, Pat Maddox wrote:
> Let's say we PUT some document to /articles/1. The document is XML,
> and has fields for a title, body, date, etc. The title is blank, but
> our app doesn't allow that, so the change isn't made. What status
> code should I use for that? My two best guesses are 403 and 409.
>
What about 400? In the O'Reilly book, 'RESTful Web Services', they
state for 400: "It's commonly used when the client submits a
representation along with a PUT or POST request, and the
representation is in the right format, but it doesn't make sense."
422 appears to be a WEBDAV code extension -- I don't find it in the
standard sources. If you want to use it, 422 ("Unprocessable
Entity") is not a bad idea.
I tend to think of 409 as something to be used when there is a
problem between your request and the current resource state (such as
trying to create a resource that already exists).
-Kathy Van Stone
kvs@...
On Fri, Aug 03, 2007 at 11:38:57AM -0700, Pat Maddox wrote: > 409: The request could not be completed due to a conflict with the > current state of the resource. This code is only allowed in situations > where it is expected that the user might be able to resolve the > conflict and resubmit the request. The response body SHOULD include > enough > information for the user to recognize the source of the conflict. > Ideally, the response entity would include enough information for the > user or user agent to fix the problem; however, that might not be > possible and is not required. > Conflicts are most likely to occur in response to a PUT request. For > example, if versioning were being used and the entity being PUT > included changes to a resource which conflict with those made by an > earlier (third-party) request, the server might use the 409 response > to indicate that it can't complete the request. In this case, the > response entity would likely contain a list of the differences between > the two versions in a format defined by the response Content-Type. > > This is another possible code - there's a conflict between what the > user submitted and what is an acceptable state. Again I can and > should return some error information. I think you're stretching the meaning of "conflict". Doesn't sound like a good fit to me. -- Paul Winkler http://www.slinkp.com
Kathryn Van Stone wrote:
>
>
>
> On Aug 3, 2007, at 2:38 PM, Pat Maddox wrote:
>> Let's say we PUT some document to /articles/1. The document is XML,
>> and has fields for a title, body, date, etc. The title is blank, but
>> our app doesn't allow that, so the change isn't made. What status
>> code should I use for that? My two best guesses are 403 and 409.
>>
>
> What about 400? In the O'Reilly book, 'RESTful Web Services', they
> state for 400: "It's commonly used when the client submits a
> representation along with a PUT or POST request, and the representation
> is in the right format, but it doesn't make sense."
>
> 422 appears to be a WEBDAV code extension -- I don't find it in the
> standard sources. If you want to use it, 422 ("Unprocessable Entity")
> is not a bad idea.
> ...
422 is in the IANA http status code registry, so it's not really
different from any other status code, except that it's defined in a
different document.
Best regards, Julian
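Whichever of the codes discussed above one picks (400, 403, 409, or 422), the handler shape is the same. Here is a minimal sketch of Pat's case, returning 422 for a well-formed but semantically invalid representation and 400 for an unparseable one; the article format and function name are assumptions for illustration, not anything from the thread.

```python
# Sketch: server-side validation for the PUT-with-blank-title case.
# Returns (status, body). 422 "Unprocessable Entity" follows the
# suggestions above; 400 or 403 would also be defensible choices.
import xml.etree.ElementTree as ET

def put_article(xml_body):
    try:
        doc = ET.fromstring(xml_body)
    except ET.ParseError:
        # Not even well-formed XML: a plain bad request.
        return 400, "<error>malformed XML</error>"
    title = (doc.findtext("title") or "").strip()
    if not title:
        # Well-formed and the right format, but violates an app rule.
        return 422, "<error>title must not be blank</error>"
    return 200, "<ok/>"

status, body = put_article("<article><title></title><body>hi</body></article>")
```

The error entity in the body is what lets the client "recognize the source of the conflict," whichever status code accompanies it.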
On 8/3/07, Olivier Pernet <o.pernet@...> wrote: > Hi, > > I'm fairly new to all things REST (and Web Services in general), but I > do have some questions for you. Namely: > > - What do you think of the idea of having an interface definition > language for REST services ? It seems some people outright reject the > idea, while others support it in the form of WADL. > It seems to me something like that would be nice to have, allowing for > example Yahoo, Google and Microsoft to agree on a unified API for > searches, that would be published using the IDL of choice. Client > could then switch fairly easily from an implementation to another. > Which leads to my second question... Olivier, 1. signature!=interface. The original definition by D.L. Parnas defined interface as signature+semantics. In the absence of a formal language to define the semantics of a communication, all you are left with is things to try and stabilise the signature (WSDL) or to describe some of the communications protocol. 2. WSDL encouraged an explosion of service interfaces, which, as we all know, was a mistake. Why not just assume that everyone and everything will adopt APP and build your services to integrate with that? > > - Is there any ongoing attempt at establishing a standard for service > registries ? If we were to have that kind of domain-specific > standardized services, it would probably be interesting to have > registries of interesting service interfaces along with running > implementations. Which could be automatically tested for liveness by > the registry, possibly using information from the IDL (testing > published resources actually are there and respond to advertised > verbs, etc). Oh Olivier, we don't want to go there again. > Sorry if this sounds very naive, as I said, I'm a newcomer. I did read > the previous thread about WADL but it seemed to drift to a discussion > of the merits of WADL itself and not IDLs for REST in general. 
> -- I think you need consensus on what constitutes an interface before you can worry about an IDL.
Most (all?) of the REST theory I've read, and maybe all the examples, discusses altering the state of one resource when hitting a URL. But I keep bumping into cases where it looks like I need to alter the state on more than one resource in one hit. Here is a simplified example: Suppose there are resources where a particular attribute must be unique across the set of all those resources. If A has color red, then B cannot have color red. Now imagine it needs to be possible to swap the state of two resources with respect to this attribute. If I start with A.color=red and B.color=green I need to end up with A.color=green and B.color=red In order to preserve the requirement of uniqueness, I want to do this in a single hit. To me this looks like I have a single URL, which I pass state for both A and B to, and it changes the state of both. This doesn't match the way I read the REST theory, but I can't come up with a way that does. Any thoughts on how to make this closer to theory? Or is this one of those areas where you just have to bend the rules? Kevin
k_mccarthy wrote: > To me this looks like I have a single URL, which I pass state > for both A and B to, and it changes the state of both. You got it. > This doesn't match the way I read the REST theory, but I > can't come up with a way that does. Yeah it does, it is just not obvious at first. Here's the obvious part, right: 1.) http://example.com/colors/a/ 2.) http://example.com/colors/b/ From that it's hard to see how to do what you need. But if you add the following URL to your REST interface, it suddenly becomes clear (at least I hope it does. :) 3.) http://example.com/color-combos/a-and-b/ Note that any state that you need can simply be modeled as a URL that represents that state. Just make sure to think in terms of nouns, not verbs (i.e. '/color-combos/a-and-b/' not '/swap-a-and-b'). But note that the use of URL #3 does not preclude the use of #1 or #2, both of which are almost certainly useful in other contexts. Does this help? -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
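The combined-resource idea can be sketched in a few lines of Python, assuming a hypothetical in-memory store and handler name: one PUT to the combo resource either applies both changes or rejects the whole update, so the uniqueness constraint is never observably violated.

```python
# Sketch: PUT handler for a combined resource like
# http://example.com/color-combos/a-and-b/ that swaps two colors
# atomically. STORE and the handler name are illustrative assumptions.

STORE = {"a": "red", "b": "green"}  # stands in for the server's state

def put_color_combo(combo, representation):
    """PUT handler for /color-combos/{x}-and-{y}/."""
    x, y = combo.split("-and-")
    if representation[x] == representation[y]:
        return 409  # would violate the uniqueness rule; reject everything
    # Both assignments happen in one request; no intermediate state
    # where the constraint is broken is ever exposed to clients.
    STORE[x], STORE[y] = representation[x], representation[y]
    return 200

status = put_color_combo("a-and-b", {"a": "green", "b": "red"})
```

The individual resources /colors/a/ and /colors/b/ can still exist and read from the same store; the combo resource is just an additional view with a wider write granularity.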
On 8/6/07, k_mccarthy <k_mccarthy@...> wrote: > Most(all?) of the REST theory I've read, and maybe all the examples, > discusses altering the state of one resource when hitting a URL. But I > keep bumping into cases where it looks like I need to alter the state > on more than one resource in one hit. In my opinion, sometimes agreed to by others on this list and sometimes not, what the server does behind the scenes is the server's business. I can think of many many cases where a change to one resource will result in changes to other resources, e.g. I buy something and it changes the inventory resource as well as my order and kicks off a shipping process and charges my credit card, etc. I don't think it violates REST in any way.
[ Attachment content not displayed ]
On Aug 6, 2007, at 7:44 AM, k_mccarthy wrote: > Most(all?) of the REST theory I've read, and maybe all the examples, > discusses altering the state of one resource when hitting a URL. But I > keep bumping into cases where it looks like I need to alter the state > on more than one resource in one hit. Umm, all REST theory that I know about prevents you from knowing the extent to which one resource state is overlapped with other resources. In other words, the normal case is for many resources to share the same or overlapping state, just as there can be many different views of a clock and each clock can be separated into subresources (hours, minutes, seconds, etc.) that have an overlapped state with the clock. ....Roy
[ Attachment content not displayed ]
* mike amundsen <mamund@...> [2007-08-06 23:40]: > is it overkill to have the browser POST/PUT to /.../a/ and then > have the server send a 301 with the location of /.../b/? No, it’s not overkill, it’s just wrong. You’re telling the client that it should repeat the same POST/PUT against the resource at the redirected URI. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
<mikeschinkel@...> wrote: > > 3.) http://example.com/color-combos/a-and-b/ I get this, but then I don't. I can see how this covers the case I gave in my example. But there are two possible generalizations which I can't quite make the leap to: A) There are N possible combinations of things to set color-combo state on. So I guess you need to generate that part of the URL. B) What if there are multiple attributes being swapped? color, height, location, etc.? Am I going to generate that part of the URL too? http://example.com/color-height-location-combos/a-and-b-and-c At this point, that seems more confusing than simple. > > Note that any state that you need can simply be modeled as a URL that > represents that state. I thought what was in the URL was resources, and you would interact with the state of those resources. Which is what your example URL (3) looks like it is doing. "color-combos" is a resource, and "a-and-b" is sub-resource under that. Kevin
--- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> wrote: > > Umm, all REST theory that I know about prevents you from knowing > the extent to which one resource state is overlapped with other > resources. In other words, the normal case is for many resources > to share the same or overlapping state I don't quite get this. Do you make a distinction between two resources whose state happens to have the same value, and two resources who share the same state? If I have two clocks, A and B, I can set them to have the same time. Maybe A is simulating traveling at the speed of light, so gradually A and B no longer have the same time. So initially the state happened to have the same value. Or I can synchronize A and B to the same time. And by this I want to mean that if I reset B, A will be reset too. This sounds like shared state. But then I would think that there is a third resource, C. I have set C to a particular time, and set A and B to refer to C. Which means there is no shared state at all. Or maybe I am completely missing the point. Kevin
k_mccarthy wrote: > Or I can synchronize A and B to the same time. And by this I want to > mean that if I reset B, A will be reset too. This sounds like shared > state. Yeah, but there's no feature in REST that will let you know A and B will synchronise. That could be communicated over REST to something or someone that understood the communication, but it isn't in REST.
--- In rest-discuss@yahoogroups.com, Jon Hanna <jon@...> wrote: > > > Yeah, but there's no feature in REST that will let you know A and B will > synchronise. That could be communicated over REST to something or > someone that understood the communication, but it isn't in REST. > Well, maybe I have a synchronized clocks resource? /clocks/synchronized Which gets me back to my original problem: what is the best way to express that I want both A and B synched? /clocks/synchronized/A,B Or do a POST to /clocks/synchronized and pass in as data the ids for A and B? A POST because I am creating a new synchronized clock. Kevin
Hello, k_mccarthy wrote: > > > Most(all?) of the REST theory I've read, and maybe all the examples, > discusses altering the state of one resource when hitting a URL. But I > keep bumping into cases where it looks like I need to alter the state > on more than one resource in one hit. > > Here is a simplified example: > Suppose there are resources where a particular attribute must be > unique across the set of all those resources. If A has color red, then > B cannot have color red. Now imagine it needs to be possible to swap > the state of two resources with respect to this attribute. If I start > with > > A.color=red and B.color=green > > I need to end up with > > A.color=green and B.color=red > > In order to preserve the requirement of uniqueness, I want to do this > in a single hit. > > To me this looks like I have a single URL, which I pass state for both > A and B to, and it changes the state of both. > > This doesn't match the way I read the REST theory, but I can't come up > with a way does. Any thoughts on how to make this closer to theory? Or > is this one of those areas where you just have to bend the rules? I think it depends on whether-or-not resources A and B share state with another resource -- such as a database that supports transactions -- and on whether you want to show or hide that fact. Without building upon the rules (I'm not sure I'd call it "bending the rules"), you can't know. As you say, if you build a 3rd resource that is able to alter the state of A and B at once, you essentially create another view of the database resource that shares A and B colour states. This is not too difficult to implement if A, B and A-and-B are practically on the same server, sharing the same database (or similar repository). If you can't make this assumption (A and B sharing their state with a resource that supports transactions), the problem becomes more difficult and requires a consensus algorithm. 
This document is probably relevant in this case (Mark posted it on this list a few weeks ago): http://betathoughts.blogspot.com/2007/06/brief-history-of-consensus-2pc-and.html Best wishes, Bruno.
k_mccarthy wrote:
> I get this, but then I don't. I can see how this covers the
> case I gave in my example. But there are two possible
> generalizations which I can't quite make the leap to:
>
> A) There are N possible combinations of things to set
> color-combo state on. So I guess you need to generate that
> part of the URL.
Yes, using URI template syntax:
http://example.com/color-combos/{color1}-and-{color2}/
> B) What if there are multiple attributes being swapped?
> color, height, location, etc.? Am I going to generate that
> part of the URL too?
> http://example.com/color-height-location-combos/a-and-b-and-c
Sure, if you need that. But I think maybe you should look at it like this:
http://example.com/{item1}-and-{item2}/
Where you upload the entire item1 and item2 vs. just their colors. And if
you need more:
http://example.com/{item1}-and-{item2}-and-{item3}/
Though I'm not sure what the swapping logic would be here.
Alternately you could look at using transactions as covered in the RESTful
Web Services book.
> > Note that any state that you need can simply be modeled as
> a URL that
> > represents that state.
>
> I thought what was in the URL was resources, and you would
> interact with the state of those resources. Which is what
> your example URL (3) looks like it is doing. "color-combos"
> is a resource, and "a-and-b"
> is sub-resource under that.
Not sure what you are asking, but I can say that there is no
"sub-resource", there is only one resource per one URL. The form the URL
takes is up to you for your own convenience in modeling your system. You
could just as RESTfully have URLs like the following although good luck
trying to comprehend it all:
http://example.com/000001
http://example.com/000002
http://example.com/000003
...
FYI, the concept of a resource is just an abstraction; as for trying to
explicitly define what the term resource actually means, therein lies
madness. '-)
HTH
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
[ Attachment content not displayed ]
Karen wrote: > > FYI the concept of a resource is just an abstraction and trying to > > explicitly define what the term resource actually means, therein lies > > madness. '-) > > Oh, not at all. A resource is a thingy. Perfectly simple. GET the > thingy, PUT the changed thingy, DELETE the thingy, POST a new > thingy. The Internet is a series of thingies. heh. [1] -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us [1] http://lists.w3.org/Archives/Public/www-ws-arch/2003Feb/thread.html#msg157
On Aug 7, 2007, at 6:34 AM, k_mccarthy wrote: > --- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> > wrote: > > > > Umm, all REST theory that I know about prevents you from knowing > > the extent to which one resource state is overlapped with other > > resources. In other words, the normal case is for many resources > > to share the same or overlapping state > > I don't quite get this. Do you make a distinction between two > resources whose state happens to have the same value, and two > resources who share the same state? Yes, but not in that example. > If I have two clocks, A and B, I can set them to have the same time. > Maybe A is simulating traveling at the speed of light, so gradually A > and B no longer have the same time. So initially the state happened to > have the same value. > > Or I can synchronize A and B to the same time. And by this I want to > mean that if I reset B, A will be reset too. This sounds like shared > state. But then I would think that there is a third resource, C. I > have set C to a particular time, and set A and B to refer to C. Which > means there is no shared state at all. > > Or maybe I am completely missing the point. I think you missed the point. You can have a single clock resource. A single "minute-hand" resource. A single "minutes-in-decimal" resource. A single "hour-hand" resource. ... The state that is being shared is the common time model. If you change the "minute-hand" resource, it will have a side-effect on the "minutes-in-decimal" resource state and the "clock" state. If you change the clock state, it will probably result in changes to all the other resource states. This is all normal and expected by the REST model, even though REST only talks about one resource at a time. The interconnected states are not visible to the client because it doesn't need to know they exist (and not knowing is the best way to avoid coupling between implementations). ....Roy
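Roy's clock can be pictured as one shared model behind several resource views; all the names below are illustrative. Updating any one view is visible through all of them, while a client only ever sees the individual resources, never the shared model.

```python
# Sketch: several resources exposing overlapping views of one shared
# time model, as in Roy's clock example. Names are illustrative.

class ClockModel:
    def __init__(self, hours, minutes):
        self.hours, self.minutes = hours, minutes

MODEL = ClockModel(3, 30)  # the hidden shared state

# Each "resource" is just a view function over the same model.
RESOURCES = {
    "/clock": lambda: "%02d:%02d" % (MODEL.hours, MODEL.minutes),
    "/minute-hand": lambda: MODEL.minutes,
    "/minutes-in-decimal": lambda: MODEL.minutes / 60.0,
}

def put_minute_hand(minutes):
    # One update to one resource; every other view reflects it as a
    # side effect, without the client knowing the states are coupled.
    MODEL.minutes = minutes

put_minute_hand(45)
```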
Steve Loughran > 2. WSDL encouraged an explosion of service interfaces, which, > as we all know, was a mistake. Agreed, as implemented. > Why not just assume that > everyone and everything will adopt APP and build your > services to integrate with that? Doesn't that just push the problem elsewhere? Now we have atom being used for lots of different things with the confusion moved up the stack. Seems like we are trading one problem for another. -- -Mike Schinkel organizer@... http://atlanta-web.org 404-276-1276 (cell) P.S. Also, according to Joe Gregorio, it's "AtomPub" not "APP" '-)
"Mike Schinkel" <mikeschinkel@...> writes: > Doesn't that just push the problem elsewhere? Now we have atom being used > for lots of different things with the confusion moved up the stack. Seems > like we are trading one problem for another. But you have added a simple constraint. "use APP" -- Nic Ferrier http://prooveme.com - easy, simple, certificated OpenID
Nic Ferrier wrote: > > Doesn't that just push the problem elsewhere? Now we have > atom being > > used for lots of different things with the confusion moved up the > > stack. Seems like we are trading one problem for another. > > But you have added a simple constraint. > > "use APP" I don't follow your point... -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org - http://t.oolicio.us
* Mike Schinkel <mikeschinkel@...> [2007-08-08 03:30]: > Steve Loughran > > Why not not just assume that everyone and everything will > > adopt APP and built your services to integrate with that? > > Doesn't that just push the problem elsewhere? Now we have atom > being used for lots of different things with the confusion > moved up the stack. Seems like we are trading one problem for > another. If Atompub is a close enough match to your problem domain, then that part of the confusion gets resolved; the *rest* is pushed up the stack, but it’s a *smaller* confusion than what you started with. You no longer need to define what your operations are (they are those that Atompub defines); only what the things mean that you are operating on. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi Olivier,
* Olivier Pernet <o.pernet@...> [2007-08-03 17:00]:
> What do you think of the idea of having an interface definition
> language for REST services ? It seems some people outright
> reject the idea, while others support it in the form of WADL.
I think there is a conflation of ideas here. My most widely cited
weblog entry (that started life as a post on this list) goes into
this:
http://plasmasturm.org/log/460/
Basically, “these are not your father’s interface descriptions.”
I think WADL is mistaken, but I don’t think the concept of
generating code from a description needs to be abandoned. I just
think it needs to describe different things than an IDL as we
know it describes.
Specifically, I think what we need is a description language that
could be implemented as a vocabulary to be embedded into a
Relax NG or Schematron schema, which identifies which parts of a
document conforming to that schema are links, or forms, and
specifies what semantics this form or link implies; e.g. the
Relax NG grammar for Atompub in XML syntax form contains, among
other things, this:
<element name="app:collection">
  <ref name="appCommonAttributes"/>
  <attribute name="href">
    <ref name="atomURI"/>
  </attribute>
  <interleave>
    <ref name="atomTitle"/>
    <zeroOrMore>
      <ref name="appAccept"/>
    </zeroOrMore>
    <zeroOrMore>
      <ref name="appCategories"/>
    </zeroOrMore>
    <zeroOrMore>
      <ref name="extensionSansTitleElement"/>
    </zeroOrMore>
  </interleave>
</element>
We could turn this grammar into a description language for a
RESTful system by saying something like the following, where I’m
going to zoom on the `href` attribute part:
<attribute name="href">
  <ref name="atomURI"/>
  <ridl:link>
    <ridl:request>
      <ridl:method name="GET"/>
      <ridl:response content-type="application/atom+xml"/>
    </ridl:request>
    <ridl:request>
      <ridl:method name="POST">
        <ridl:content-type name="application/atom+xml;entry"/>
      </ridl:method>
      <ridl:response content-type="application/atom+xml;entry"/>
    </ridl:request>
    <ridl:request>
      <ridl:method name="POST">
        <ridl:content-type name="*/*"/>
      </ridl:method>
      <ridl:response content-type="application/atom+xml;entry"/>
    </ridl:request>
  </ridl:link>
</attribute>
The “RIDL” vocabulary I used here is highly incomplete, of
course, and the nesting might come out differently also, but it’s
a sketch that demonstrates the sort of approach I envision. There
would be a corresponding `ridl:form` element, and a lot more
elements specifying the pre- and post-conditions each particular
request/response cycle must fulfill.
Assuming we have all these facilities, then you could take
RIDL-annotated `atomsvc.rng` and `atom.rng` grammars and run them
through a code generator that will construct a library which
presents an API based on the semantics of Atompub Service and
Category Documents and Atom Feed Documents.
But which knows nothing about the URIs your Atompub service uses.
Nor does it know anything about Atompub at all, of course.
The library just follows links and submits forms as you make
calls against its API. Of course you would then have to couple it
with some hand-written code to fill in the gaps, namely why and
when to make the requests that the annotation specifies as
acceptable – the knowledge of what the levers in an Atompub
service *mean*, rather than just what the levers are.
What we have here is much like WADL, but rather than being a
stand-alone IDL telling you which URIs to make what requests
against, it is embedded in a grammar such that it explains where
to find the relevant URIs in a representation that will be
returned by the service at run time, *and then* what requests to
make against *those*.
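To make the run-time discovery concrete, here is a small sketch of the client side (the service document, URI, and helper names below are illustrative assumptions, not part of any published RIDL vocabulary): the client locates the collection URI inside a representation instead of having it hard-coded.

```python
# Sketch only: a generic client discovers the collection URI from an
# Atompub service document at run time rather than hard-coding it.
# SERVICE_DOC and the example URI are made up for illustration.
import xml.etree.ElementTree as ET

APP_NS = "http://www.w3.org/2007/app"

SERVICE_DOC = """\
<service xmlns="http://www.w3.org/2007/app"
         xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Main Site</atom:title>
    <collection href="http://example.com/blog/entries">
      <atom:title>Entries</atom:title>
      <accept>application/atom+xml;type=entry</accept>
    </collection>
  </workspace>
</service>
"""

def collection_hrefs(service_xml):
    """Return the href of every app:collection in a service document."""
    root = ET.fromstring(service_xml)
    return [c.attrib["href"]
            for c in root.iter("{%s}collection" % APP_NS)]

print(collection_hrefs(SERVICE_DOC))
```

A RIDL-style annotation on the grammar would then tell such a generic client which requests (a GET for the feed, a POST of an entry) are acceptable against the discovered href.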
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote:
> If Atompub is a close enough match to your problem domain,
> then that part of the confusion gets resolved; the *rest* is
> pushed up the stack, but it's a *smaller* confusion than what
> you started with. You no longer need to define what your
> operations are (they are those that Atompub defines); only
> what the things mean that you are operating on.

Oh, I concur that AtomPub is useful in many contexts but, as you imply,
not all. Advocating it as the only solution needed is shortsighted,
IMO. Don't you agree?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
* Mike Schinkel <mikeschinkel@...> [2007-08-10 20:05]:
> A. Pagaltzis wrote:
> > If Atompub is a close enough match to your problem domain,
> > then that part of the confusion gets resolved; the *rest* is
> > pushed up the stack, but it's a *smaller* confusion than what
> > you started with. You no longer need to define what your
> > operations are (they are those that Atompub defines); only
> > what the things mean that you are operating on.
>
> Oh, I concur that AtomPub is useful in many contexts but, as
> you imply, not all. Advocating it as the only solution needed
> is shortsighted, IMO. Don't you agree?

With a caveat; namely that I think Atompub is applicable to many
more contexts than people might realise. So I think it worthwhile
to tell them to try it first before they do anything else.

But yeah, I’m not a dogmatist. If they’ve given it thought and
found it really *is* a bad fit, then absolutely, insisting to
shoehorn their problem into Atompub would be silly.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
> With a caveat; namely that I think Atompub is applicable to
> many more contexts than people might realise. So I think it
> worthwhile to tell them to try it first before they do anything else.

And in the case where it is a good fit, there is then a need to again
define the interactions for each use-case (beyond the nominal case of
"publish.")

One thing I dislike about AtomPub, and I know I'm probably in the
minority here, is it requires special tools to interact with vs. just
having a browser, and it requires web services be created in addition
to web pages. I'd really advocate for exposing web services as a
default case via semantic HTML instead of AtomPub, though AtomPub
could be the default second case in "the world according to Mike."

I think it should be a best practice that when web developers build a
website, it should also double as a REST-based web service. That would
take web services out of the realm of "mystical" and make it possible
for less skilled people to interact with them and to see the value in
them. And I was glad to see that the book RESTful Web Services
advocated that approach too (though not as strongly as I would have
liked.)

JMTCW anyway.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
* Mike Schinkel <mikeschinkel@...> [2007-08-11 05:25]:
> > With a caveat; namely that I think Atompub is applicable to
> > many more contexts than people might realise. So I think it
> > worthwhile to tell them to try it first before they do
> > anything else.
>
> And in the case where it is a good fit, there is then a need
> to again define the interactions for each use-case (beyond the
> nominal case of "publish.")

Sure, but they’ll have to define those anyway. Atompub just takes
care of predefining the lower-level machinery so it doesn’t have
to be reinvented over and over by everyone who has similar needs.

> One thing I dislike about AtomPub, and I know I'm probably in
> the minority here, is it requires special tools to interact
> with vs. just having a browser,

Wait, first you’re telling me you don’t like one-size-fits-all
advocacy for Atompub, then you tell me we should just shoehorn
all apps into the browser? :-)

Me, I hope that Atompub uptake is strong enough that we’ll see
browsers expand to support everything required, so that one day
we shall, in fact, be able to interact with many Atompub services
with just a browser. (In fact, with XMLHttpRequest you already
can. Several people are working on such Atompub clients.)

I don’t mean just Atompub support here, btw; I mean general
support for more of HTTP (like methods other than GET and POST –
what a concept!), and markup languages like HTML5 that allow
users to actually exploit this expanded support.

> it requires web services be created in addition to web pages.
>
> I'd really advocate for exposing webservices as a default case
> via semantic HTML instead of AtomPub

How is that different? A WordPress blog has a public face and an
admin area. Isn’t it really kind of incidental what kind of
content types get exchanged on which half of the service?

The important point is that no approach will remove this
public-vs-editable segregation for most apps.
And semantic HTML, uhm, good luck repairing HTML-as-she-is-spoke
sufficiently that it becomes machine-consumable, in a reasonable
timeframe. Not that I wouldn’t like that myself, but, while we
may all gaze at the stars, we’re still down here in this gutter.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi Mike,

On Aug 10, 2007, at 8:24 PM, Mike Schinkel wrote:
> One thing I dislike about AtomPub, and I know I'm probably in the
> minority here, is it requires special tools to interact with vs.
> just having a browser, and it requires web services be created in
> addition to web pages. I'd really advocate for exposing web
> services as a default case via semantic HTML instead of AtomPub
> though AtomPub could be the default second case in "the world
> according to Mike."

I would love to see someone define/build a hAtom version of AtomPub,
that worked in standard web browsers:

http://microformats.org/wiki/hatom

Any takers? :-)

-- Ernie P.
A. Pagaltzis wrote:
> Sure, but they'll have to define those anyway. Atompub just
> takes care of predefining the lower-level machinery so it
> doesn't have to be reinvented over and over by everyone who
> has similar needs.

Of course, but there are still many things in the use-case domain
AtomPub doesn't address, so we end up having to deal with those. Hence
back to the same problem, albeit higher up the stack.

> Wait, first you're telling me you don't like
> one-size-fits-all advocacy for Atompub, then you tell me we
> should just shoehorn all apps into the browser? :-)

You are putting words in my mouth, as usual. :) I didn't say that I
didn't like the one-size-fits-all aspect; that aspect I do like: the
uniform interface. What I was trying to say is there will still be a
need for use-case specifics, and without some way to identify and
codify those we'll end up with the same problem, albeit higher up the
stack. (Is there an echo in here? Am I repeating myself? :)

> Me, I hope that Atompub uptake is strong enough that we'll
> see browsers expand to support everything required that one
> day we shall, in fact, be able to interact with many Atompub
> services with just a browser.

I have mixed feelings about that. Yes, I assume I would like browser
support, but frankly I find the browser support in IE7 of RSS
disconcerting. When I go there I expect to see the RSS, not the RSS
view, although I can't exactly say why yet. It always frustrates me
when I click a link on a Google search result page and realize that it
was an RSS feed. If AtomPub proliferates for general purpose use, that
could even get a lot worse.

But what I do know is I strongly believe there should be only one
primary format for the web: HTML. The fact it has stayed the primary
interface for years from a user perspective is, I believe, one of the
reasons the web has been so successful. Throw lots of complexity into
the mix and it will just muddy everything.

> (In fact, with XMLHttpRequest you already can.)
Actually, that's not true. Browsers don't natively display things using
XMLHttpRequest; it is a capability that needs to be programmed. By your
logic, then, Java applets and in some browsers ActiveX controls can all
be interacted with via a browser. Tell me, how many grandmas are
interacting with XMLHttpRequest?

> I don't mean just Atompub support here, btw; I mean general
> support for more of HTTP (like methods other than GET and
> POST - what a concept!), and markup languages like HTML5 that
> allow users to actually exploit this expanded support.

That I concur with.

> > it requires web services be created in addition to web pages.
> >
> > I'd really advocate for exposing webservices as a default case via
> > semantic HTML instead of AtomPub
>
> How is that different? A WordPress blog has a public face and
> an admin area. Isn't it really kind of incidental what kind
> of content types get exchanged on which half of the service?

The differences are:

-- the requirement to develop separately for multiple content types.
-- the high likelihood that the two efforts will result in output
   lacking 100% fidelity.
-- the high likelihood that the web service won't be developed even
   though the web site is.
-- that tools available for HTML, including browsers, will never in
   their entirety be available for AtomPub.
-- the fact that the default interaction for users on the web is with
   content that behaves like HTML.

Having it be a best practice where building a website means building a
web service will ensure many more web services actually get built. And
we can see significant uptake if we just get CMS developers to consider
this a best practice. I'm currently working in Drupal and I plan to
look at how Drupal can be made to offer web services by default.

The only way it is viable in my world view is if the CMS that is
generating the content type generates them with 100% fidelity without
user/admin involvement; i.e. the content in HTML is exactly the same as
in AtomPub.
For example, TurboGears does it that way. And even then, it doesn't
allow someone to navigate the web service with a browser and be able to
see w/o programming the data complete with links they can right-click
to retrieve said information.

> The important point is that no approach will remove this
> public-vs-editable segregation for most apps.

I don't understand your terminology.

> And semantic HTML, uhm, good luck repairing
> HTML-as-she-is-spoke sufficiently that it becomes
> machine-consumable, in a reasonable timeframe. Not that I
> wouldn't like that myself, but, while we may all gaze at the
> stars, we're still down here in this gutter.

You are looking at it wrong. We don't need to get everyone to convert
all HTML content as we would need to in order to be able to make a
clean-slate HTML5 possible; we just need to have it evolve as a best
practice with software like WordPress and Drupal leading the way. It's
a win for those websites and CMSes that implement it. We don't need
100% of the web for this to be useful. Each site that does it will be
useful in its own right. Clearly the same is true for AtomPub; we don't
have lots of existing AtomPub to convert either.

That said, you mention that AtomPub handles things for you so you don't
have to. I've read the AtomPub spec, although I don't know how awake I
was reading it considering how long it is, and I didn't see anything
earth-shattering. I see AtomPub as a very valuable albeit specialty
protocol/format. It seems you see it for general purpose use. Can you
tell me specifically what it is about AtomPub that causes you to value
its use outside the nominal case of publishing?

Maybe it is a panacea and I just don't yet see it. Here's your chance,
convert me.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
--- In rest-discuss@yahoogroups.com, Ernest Prabhakar
<ernest.prabhakar@...> wrote:
>
> I would love to see someone define/build a hAtom version of AtomPub,
> that worked in standard web browsers:
>
> http://microformats.org/wiki/hatom
>
> Any takers? :-)
>
> -- Ernie P.

I'm wondering if hAtom would be the simplest thing that could possibly
work. Do we really need to burden the client with the ceremonial
stuffing of the envelope? Start by posting the title and content. An
interesting feature of HTTP that's often ignored in favor of envelopes
is that you can use named parameters.

If you can determine the blog's posting URL, which is fairly easy to
do, you've covered the basics for WordPress, Blogger and MovableType.
I think that's a good base to start with. Add authentication -- we're
doing this from the browser, after all -- and you also get the author
metadata and access control. In response, the server will redirect you
to a URL where you can update, publish and delete the post. Now this
being the Browsable Web, we'll have to do with just two verbs.

I think we already have enough commonality here to extract an interface
that already works for a lot of people to handle all the common cases.
That would be the microformat way of doing things, even if we don't
involve hAtom directly.

-- Assaf Arkin
http://labnotes.org
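A rough sketch of that envelope-free create operation: posting just the title and content as ordinary form fields, the way a browser would. The posting URL and field names below are illustrative assumptions, not any particular blog engine's actual API; the request is built but not sent.

```python
# Sketch: a browser-style form POST carrying only title and content.
# The URL and the "title"/"content" field names are assumptions here;
# real engines (WordPress, Blogger, MovableType) each name them
# differently.
from urllib.parse import urlencode
from urllib.request import Request

def make_post_request(posting_url, title, content):
    """Build (but do not send) a form-encoded POST to a posting URL."""
    body = urlencode({"title": title, "content": content}).encode("utf-8")
    return Request(
        posting_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = make_post_request("http://example.com/blog/post",
                        "Hello", "First post via plain HTTP.")
print(req.method, req.full_url)
```

Per the sketch above, the server would answer with a redirect to the new post's URL, where further requests (still just GET and POST in a browser) update, publish or delete it.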
Nick Gall wrote:
> I am moving this permathread
> <http://tech.groups.yahoo.com/group/service-orientated-architecture/message/8641?var=1&l=1>
> from the SOA Yahoo Group
> <http://tech.groups.yahoo.com/group/service-orientated-architecture/>
> to the REST-discuss group.
>
> In essence, Gregg is arguing that the MODBUS
> <http://en.wikipedia.org/wiki/Modbus> protocol (see this MODBUS
> tutorial <http://www.lammertbies.nl/comm/info/modbus.html>) is
> RESTful. See the permathread for more context.

My specific argument was about it being uniform. The fact that the
definition of REST is defined by other less concrete terms such as
Generality and Hypermedia is a separate issue.

Gregg Wonderly
* Mike Schinkel <mikeschinkel@...> [2007-08-11 21:00]:
> Of course, but there are still many things in the use-case
> domain AtomPub doesn't address so we end up having to deal with
> those. Hence back to the same problem albeit higher up the
> stack.
No, not the *same* problem – just the *application-specific* part
of the problem. The common parts of the problem are factored out
to Atompub.
But that’s a given; Atompub is infrastructure, just like HTTP
itself. The fact that Atompub alone is not enough to model the
full semantics of the application is no more to the point than
the fact that HTTP isn’t either. Yet I don’t see people frowning
and saying “well HTTP doesn’t really solve my problem so what’s
the point – let’s make a new TCP-based wire protocol.”
> > > One thing I dislike about AtomPub, and I know I'm probably
> > > in the minority here, is it requires special tools to
> > > interact with vs. just having a browser,
>
> > Wait, first you're telling me you don't like
> > one-size-fits-all advocacy for Atompub, then you tell me we
> > should just shoehorn all apps into the browser? :-)
>
> You are putting words in my mouth, as usual. :)
>
> I didn't say that I didn't like the one-size-fits-all aspect,
> that aspect I do like; the uniform interface. What I was trying
> to say is there will still be a need for use-case specifics and
> without some way to identify and codify those we'll end up with
> the same problem, albeit higher up the stack (Is there an echo
> in here? Am I repeating myself? :)
No, you’re not. I see two different statements. If the latter is
what you meant by the first, then the first one was not explicit
enough for me to understand it clearly.
Considering the latter argument, though, I’m not sure how “just
having a browser” solves the problem that Atompub purportedly
does not. Is it because HTML bundles the app chrome alongside the
data, which Atompub does not?
If so, I don’t see this as a strong objection. I assume we will
see technologies other than HTML (eg. XForms) or complementary
to HTML (eg. an HTML page generated by an XSL transform
referred to from an xml-stylesheet PI in the Atompub service
document – assuming browsers had support for PUT and DELETE,
and/or your server knew how to tunnel them through POST for
legacy browsers).
> > Me, I hope that Atompub uptake is strong enough that we'll
> > see browsers expand to support everything required that one
> > day we shall, in fact, be able to interact with many Atompub
> > services with just a browser.
>
> I have mixed feelings about that. Yes, I assume I would like
> browser support but frankly I find the browser support in IE7
> of RSS disconcerting.
That sounds to me like “it doesn’t work the way it always used to
and that freaks me out.” There are developments that weird me out
too, but what they make me worry about is me, not the new ways;
I don’t want to get crusty and set in my ways.
See also http://www.douglasadams.com/dna/19990901-00-a.html
> I strongly believe there should be only one primary format for
> the web, HTML.
HTML is going exactly nowhere. All of the content I consume on
the web today is HTML. However most of the time it comes wrapped
in something that’s not `text/html`. What I see is just the
application chrome moving out of HTML and into structured formats
so that the content can stand pure and undiluted. I don’t see how
you can disagree that this is a good thing unless you prefer
visiting 300 sites in your browser over reading them in an
aggregator.
> > (In fact, with XMLHttpRequest you already can.)
>
> Actually, that's not true. Browsers don't natively display
> things using XMLHttpRequest, it is a capability that needs to
> be programmed. By your logic then Java applets and in some
> browsers ActiveX controls can all be interacted with via a
> browser. Tell me, how many grandmas are interacting with
> XMLHttpRequest?
Your examples are at the extreme end of the continuum. Javascript
is code run by the browser, but it’s much closer to “content” in
many characteristics than are applets and ActiveX controls.
Note that I consider XMLHttpRequest an interim solution only. I’m
not saying it’s the way things should work; however I *am* saying
that you can use it to make these things *work* in the here and now.
Running code trumps theoretical purity.
> > > it requires web services be created in addition to web
> > > pages.
> > >
> > > I'd really advocate for exposing webservices as a default
> > > case via semantic HTML instead of AtomPub
> >
> > How is that different? A WordPress blog has a public face and
> > an admin area. Isn't it really kind of incidental what kind
> > of content types get exchanged on which half of the service?
>
> The differences are:
>
> • the requirement to develop separately for multiple content
> types.
> • the high likelihood that the two efforts will result in output
> lacking 100% fidelity.
> • the high likelihood that the web service won't be developed
> even though the web site is.
> • that tools available for HTML, including browsers, will never
> in their entirety be available for AtomPub.
> • the fact that the default interaction for users on the web is
> with content that behaves like HTML
>
> Having it be a best practice where building a website means
> building a web service will ensure many more web services
> actually get built. […] The only way it is viable in my world
> view is if the CMS that is generating the content type
> generates them with 100% fidelity without user/admin
> involvement
I think you are looking at it from the wrong angle, but Atompub
is new and I guess it’s understandable that people have trouble
imagining how their instincts would be changed by a world in
which it was already ubiquitous.
Here’s the point (again): Atompub is infrastructure.
Think for a moment about a world in which plenty of ready-made
implementations exist as pluggable libraries or frameworks, or
maybe even fullblown servers on the scale of Apache. (OK, the
latter is less likely.) In this world, no one would think of the
task as “writing an Atompub implementation alongside the HTML
interface.” You would use an Atompub imlpementation to build the
plumbing of your application, and then write the HTML interface
as a client application on top of the Atompub service. The
Atompub service just falls out of that for free.
> you mention that AtomPub handles things for you so you don't
> have to. I've read the AtomPub spec, although I don't know how
> awake I was reading it considering how long it is, and I didn't
> see anything earth shattering. I see AtomPub as a very valuable
> albeit specialty protocol/format. It seems you see it for
> general purpose use. Can you tell me specifically what it is
> about AtomPub that causes you to value its use outside the
> nominal case of publishing?
>
> Maybe it is a panacea and I just don't yet see it. Here's your
> chance, convert me.
The role in which I see Atompub is retrofitting HTTP with a
notion of collections. In so doing it sets up expectations for
clients about how the mechanics of creating resources on the
server will work.
The only means for changing state on the server with a browser is
HTML forms and POST. But these forms and POST requests are mute
and featureless; the browser has no idea whatsoever about the
meaning of what it’s doing. You need a human to drive it.
With Atompub, the client is no longer just putting forth its fist
with eyes shut tight, murmuring “here’s some data, I don’t know
what you want it for.” It can actually have an intent for that
data and a notion of what is going to happen with it.
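As an illustration of what that intent looks like on the wire, here is a hedged Python sketch of an Atompub create request: the entry media type and a Slug hint announce "I am creating a member of this collection." The request is only constructed, not sent, and the details beyond what the protocol defines are assumptions.

```python
# Sketch: building an Atompub "create member" request. The client's
# intent is visible in the Content-Type (an Atom entry) and the Slug
# naming hint; a conforming server answers 201 Created with a Location
# header for the new member. Nothing here is sent over the network.
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

def build_create_request(title, body_text):
    """Return (headers, payload) for POSTing a new entry to a collection."""
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element("{%s}entry" % ATOM_NS)
    ET.SubElement(entry, "{%s}title" % ATOM_NS).text = title
    content = ET.SubElement(entry, "{%s}content" % ATOM_NS,
                            {"type": "text"})
    content.text = body_text
    headers = {
        "Content-Type": "application/atom+xml;type=entry",
        "Slug": title,  # a hint the server may use when minting the URI
    }
    return headers, ET.tostring(entry, encoding="utf-8")

headers, payload = build_create_request("First Post", "Hello, Atompub.")
print(headers["Content-Type"])
```

Because the expectations (media type in, 201 plus Location out) are fixed by the protocol, a generic client can drive any collection this way without service-specific code.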
Consider Amazon S3; it’s such a simple service that if it had
happened 5 years later the documentation would consist of “see
RFC <whatever Atompub will be assigned>” and no one would be
writing custom clients for it.
But that only scratches the surface. Think about how ubiquitous
the notion of collections is. To quote Bill de hÓra:
<http://www.dehora.net/journal/2007/07/shipping_notes.html>:
AtomPub sits in a very strange place, as it has the potential
to disrupt half a dozen or more industry sectors, such as,
Enterprise Content Management, Blogging, Digital/Desktop
Publishing and Archiving, Mobile Web, EAI/WS-* messaging,
Social Networks, Online Productivity tools. As interesting as
the adoption rates, will be people and sectors finding
reasons not use it to protect distribution channels and data
lockins with more complicated solutions. Any kind of data
garden is fair game for AtomPub to rationalize.
This is much, much bigger than publishing systems.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote:
> No, not the *same* problem – just the *application-specific*
> part of the problem. The common parts of the problem are
> factored out to Atompub.
>
> But that's a given; Atompub is infrastructure, just like HTTP
> itself. The fact that Atompub alone is not enough to model
> the full semantics of the application is no more to the point
> than the fact that HTTP isn't either. Yet I don't see people
> frowning and saying "well HTTP doesn't really solve my
> problem so what's the point – let's make a new TCP-based wire
> protocol."
Sigh. I agree with your points here, but I can't seem to get you to
understand mine. Maybe I'm just too tired and we'll just have to leave my
points for another day? :)
> (Is there an echo in here? Am I repeating myself? :)
>
> No, you're not. I see two different statements. If the latter
> is what you meant by the first, then the first one was not
> explicit enough for me to understand it clearly.
That was a joke. Are you German?
FYI, I've got strong German heritage ("Schinkel"), so that comment was
self-deprecating as much as anything. :)
> Considering the latter argument, though, I'm not sure how
> "just having a browser" solves the problem that Atompub
> purportedly does not. Is it because HTML bundles the app
> chrome alongside the data, which Atompub does not?
One reason is that web-services-on-HTML (WSO-HTML?) can empower "happy
accidents" whereas that is unlikely for AtomPub anytime in the foreseeable
future. IOW, people surfing a combined website/web-service are more apt to
accidentally visualize how they can write web services to interact with the
site, whereas most likely only people who pre-envision using web services
will be likely to move forward with building Web Services from AtomPub.
And for those who do decide to pursue web services being able to surf the
web service only requires the browser. "Surfing" an AtomPub service will
require specialized clients. The more "surfable" a web service is, the more
approachable it is IMO.
Not to mention the fact that building WSO-HTML means that only one project
is required to be approved by its sponsor (company, government agency,
department, manager, etc.) instead of two, albeit one with more constraints.
> If so, I don't see this as a strong objection. I assume we
> will see technologies other than HTML (eg. XForms) or
> complementary to HTML (eg. an HTML page generated by an
> XSL transform referred to from an xml-stylesheet PI in the
> Atompub service document – assuming browsers had support for
> PUT and DELETE, and/or your server knew how to tunnel them
> through POST for legacy browsers).
Good. :)
> That sounds to me like "it doesn't work the way it always
> used to and that freaks me out." There are developments that
> weird me out too, but what they make me worry about is me,
> not the new ways; I don't want to get crusty and set in my ways.
No, it's more like recognizing Jakob's law:
http://notebook.arkane-systems.net/index.php/Jakob's_Law_of_the_Web_User_Experience
> See also http://www.douglasadams.com/dna/19990901-00-a.html
Funny, my Well Designed URLs initiative [1] is consistent with that...
> > I strongly believe there should be only one primary format for the
> > web, HTML.
>
> HTML is going exactly nowhere. All of the content I consume
> on the web today is HTML. However most of the time it comes
> wrapped in something that's not `text/html`. What I see is
> just the application chrome moving out of HTML and into
> structured formats so that the content can stand pure and
> undiluted. I don't see how you can disagree that this is a
> good thing unless you prefer visiting 300 sites in your
> browser over reading them in an aggregator.
I don't follow your vision for the future well enough to agree or disagree.
> Your examples are at the extreme end of the continuum.
> Javascript is code run by the browser, but it's much closer
> to "content" in many characteristics than are applets and
> ActiveX controls.
The extreme ends are exactly what I am trying to advocate for.
And your points are simply drawing rationalizing distinctions rather than
recognizing the distinction I was making between those things people can
retrieve and view with a browser and those things that require other tools
and/or are hidden and require programming skill to utilize during the
browsing experience (i.e. Javascript).
> Note that I consider XMLHttpRequest an interim solution only.
> I'm not saying it's the way things should work; however I
> *am* saying that you can use it to make these things *work* in
> the here and now.
You can. I can. MOST people CANNOT.
> Running code trumps theoretical purity.
And how does that relate to what we are discussing?
> I think you are looking at it from the wrong angle, but
> Atompub is new and I guess it's understandable that people
> have trouble imagining how their instincts would be changed
> by a world in which it was already ubiquitous.
I am generally the one who tries to understand why others don't have the
vision, so it is hard for me to take that comment without feeling a tad
defensive.
There are two reasons to mention here why I think HTML is important: 1.)
Jakob's Law [1] and 2.) the "view source effect." People are much more apt
to "view source" and discover web services on HTML pages they happen to be
surfing than they are apt to view source on AtomPub resources that they
don't happen to be surfing.
Don't get me wrong. There is a great chance AtomPub will represent the
professional end of web services as you likely envision, and that those
who are serious will gravitate to AtomPub in addition to WSO-HTML. Many who
start with WSO-HTML would likely then gravitate to AtomPub as they gained
success. But in my world view WSO-HTML empowers everyman to create web
services, at least read-only web services.
> Here's the point (again): Atompub is infrastructure.
Fine. So is HTML. One should not obviate the other.
> Think for a moment about a world in which plenty of
> ready-made implementations exist as pluggable libraries or
> frameworks, or maybe even fullblown servers on the scale of
> Apache. (OK, the latter is less likely.) In this world, no
> one would think of the task as "writing an Atompub
> implementation alongside the HTML interface." You would use
> an Atompub implementation to build the plumbing of your
> application, and then write the HTML interface as a client
> application on top of the Atompub service. The Atompub
> service just falls out of that for free.
Are you saying that people would build AtomPub-based websites and then apply
an HTML layer? While I agree that this might be likely in larger
enterprises or serious web businesses, I highly doubt we'll ever see the 80% of
the bottom of the pyramid doing that because to them it is an unnecessary
level of indirection. And that 80% is the part of the pyramid that
interests me most.
> The role in which I see Atompub is retrofitting HTTP with a
> notion of collections. In so doing it sets up expectations
> for clients about how the mechanics of creating resources on
> the server will work.
Okay, I'll buy that. One of the big problems with VBScript on ASP was its
lack of usable collections.
> The only means for changing state on the server with a
> browser is HTML forms and POST. But these forms and POST
> requests are mute and featureless; the browser has no idea
> whatsoever about the meaning of what its doing. You need a
> human to drive it.
Accepted.
> With Atompub, the client is no longer just putting forth its
> fist with eyes shut tight, murmuring "here's some data, I
> don't know what you want it for." It can actually have an
> intent for that data and a notion of what is going to happen with it.
Okay, but can you give me specific examples? "Show me the code!" :)
> Consider Amazon S3; it's such a simple service that if it had
> happened 5 years later the documentation would consist of
> "see RFC <whatever Atompub will be assigned>" and no one
> would be writing custom clients for it.
My initiative [2] with Alan Dean (that I haven't had time for in quite a
while but do plan to return to) has a goal of minimizing custom clients.
> But that only scratches the surface. Think about how
> ubiquitous the notion of collections is. To quote Bill de hOra:
>
> <http://www.dehora.net/journal/2007/07/shipping_notes.html>:
> AtomPub sits in a very strange place, as it has the potential
> to disrupt half a dozen or more industry sectors, such as,
> Enterprise Content Management, Blogging, Digital/Desktop
> Publishing and Archiving, Mobile Web, EAI/WS-* messaging,
> Social Networks, Online Productivity tools. As interesting as
> the adoption rates, will be people and sectors finding
> reasons not to use it to protect distribution channels and data
> lockins with more complicated solutions. Any kind of data
> garden is fair game for AtomPub to rationalize.
>
> This is much, much bigger than publishing systems.
Okay. That said, what's wrong with applying the notion of collections to
semantic HTML?
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org - http://t.oolicio.us
[1] http://www.welldesignedurls.org
[2] http://simplewebservices.org
--- In rest-discuss@yahoogroups.com, Gregg Wonderly <gergg@...> wrote:
>
> Nick Gall wrote:
> > I am moving this permathread
> > <http://tech.groups.yahoo.com/group/service-orientated-architecture/message/8641?var=1&l=1>
> > from the SOA Yahoo Group
> > <http://tech.groups.yahoo.com/group/service-orientated-architecture/> to
> > the REST-discuss group.
> >
> > In essence, Gregg is arguing that the MODBUS
> > <http://en.wikipedia.org/wiki/Modbus> protocol (see this MODBUS tutorial
> > <http://www.lammertbies.nl/comm/info/modbus.html>) is RESTful. See the
> > permathread for more context.
>
> My specific argument was about it being uniform. The fact that the
> definition of REST is defined by other, less concrete terms such as
> Generality and Hypermedia is a separate issue.
>
> Gregg Wonderly

But the meaning of "uniform" in the context of REST is much more specific
than the dictionary definition of "uniform". In the context of REST, and
especially in the context of Roy's thesis, "uniform interface" is defined
precisely by the four constraints on which you yourself suggested the
"discussion might be better focused"! Since hypermedia is the most important
of those four essential constraints that define the very meaning of
"uniformity" in the context of REST, hypermedia is hardly "a separate
issue". Hypermedia goes to the very heart of the meaning of "uniform
interface" in the thesis. In the REST usage, an interface CANNOT be
"uniform" unless it uses hypermedia as a resource representation.

-- Nick
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Peter,
thanks for introducing me to XML-DSIG by showing how one can use it
to sign my foaf file. (see end of this email)
Putting on my RESTful and RDF glasses makes me think that that
solution takes what would be termed the SOAPish turn: it tries to
envelop the content instead of referring to it. In the example
described at:
http://blogs.sun.com/bblfish/entry/cryptographic_web_of_trust
there is a URL for me
http://bblfish.net/people/henry/card#me
which one can HTTP GET information for by fetching
http://bblfish.net/people/henry/card
which returns one of the alternate representations
http://bblfish.net/people/henry/card.rdf
http://bblfish.net/people/henry/card.n3
The signatures for those representations are in other files, also
accessible via URLs namely
http://bblfish.net/people/henry/card.rdf.asc
http://bblfish.net/people/henry/card.n3.asc
By doing this we have the following advantages:
1- we can identify every object clearly by a URL. This works
nicely with web caches, and is a good separation of concerns. We have
URLs for each representation, URLs for me, URLs for the signature.
2- HTTP provides a clear distinction between the envelope and the
content. In the XML-DSIG example, is the content the XML-DSIG
wrapper, or is it the encoded N3 file?
3- separation of concerns: people only need to download the
signature and my public key if it is of interest to them. Perhaps if
there is something suspicious in the RDF content...
Now the disadvantage of the solution I proposed is that the caches
might end up returning a stale copy of the pgp signature. XML-DSIG
bypasses that problem of course because it sends the content and the
signature simultaneously. HTTP could solve the problem by sending the
signature in the header too, though that would clearly be cumbersome.
One simple solution is to specify the etag of the signature in the
card rdf:
<http://bblfish.net/people/henry/card.n3>
wot:assurance <http://bblfish.net/people/henry/card.n3.asc> ;
awol:type "text/rdf+n3" .
<http://bblfish.net/people/henry/card.n3.asc>
xxx:etag "13b3-ba-56463740";
xxx:content-length 186 .
Now a client that got card.n3 would know, if it then did an HTTP GET
on card.n3.asc and the response did not have that ETag, content length,
or last-modified date, that the two representations were in some way
out of sync.
Currently they are not:
hjs@bblfish:0$ curl -I http://bblfish.net/people/henry/card.n3.asc
HTTP/1.1 200 OK
Date: Mon, 13 Aug 2007 19:29:22 GMT
Server: Apache/2.0.55 (Unix) DAV/2 mod_perl/2.0.2 Perl/v5.8.4
Last-Modified: Fri, 10 Aug 2007 11:04:21 GMT
ETag: "13b3-ba-56463740"
Accept-Ranges: bytes
Content-Length: 186
Content-Type: text/plain
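A client-side sketch of that consistency check: the card states the ETag and Content-Length it expects the detached signature to have, and the client compares them against the headers the server actually returns. The xxx: property names above are placeholders, and so is this code.

```python
# Compare the metadata asserted in the card against the response
# headers from a HEAD/GET on the signature resource.  If they differ,
# the cached signature may be stale relative to the card.
def signature_in_sync(stated, response_headers):
    """True if the fetched signature matches the metadata in the card."""
    etag = response_headers.get("ETag", "").strip('"')
    length = int(response_headers.get("Content-Length", "-1"))
    return stated["etag"] == etag and stated["content-length"] == length

# Values taken from the card and the curl output above.
stated = {"etag": "13b3-ba-56463740", "content-length": 186}
fresh = {"ETag": '"13b3-ba-56463740"', "Content-Length": "186"}
stale = {"ETag": '"13b3-ba-99999999"', "Content-Length": "190"}
```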
This is about as much as I can say about XML-DSIG as a novice in
cryptography. I will try to look at it in more detail.
On 11 Aug 2007, at 07:56, Peter Williams wrote:
> See below:
>
> I (counter) signed your entire file, using XML-DSIG (with SAML-
> defined security semantics, as signaled).
Thanks, that is a nice introduction to XML-DSIG.
> I treated the FOAF file as a string-form of a (rather long) name,
> which bears its naming architecture, its naming contexts, its naming
> schema, its naming relationships, and its new name protections.
It looks like one should be able to extract a good ontology from the
above, in the spirit of WOT, or as an enhancement of WOT. Just a few
names to be added to http://xmlns.com/wot/0.1/
As shown in the article
http://blogs.sun.com/bblfish/entry/cryptographic_web_of_trust
the advantage of RDF vocabularies is that they can be used in many
different contexts, in a very flexible manner.
> If one treats the FOAF file as a text stream, I don't see why one
> cannot similarly encode and then sign the N3 form. The XML form of
> the RDF seems to be adding little.
Indeed the XML form and the N3 form are just alternates of one
another, as I stated in the example
<http://bblfish.net/people/henry/card> a foaf:PersonalProfileDocument;
iana:alternate <http://bblfish.net/people/henry/card.rdf>,
<http://bblfish.net/people/henry/card.n3> .
They represent exactly the same graph. Indeed the XML is generated
automatically from the N3 using
cwm card.n3 --rdf > card.rdf
>
> ________________________________
>
> From: general-bounces@... on behalf of Story Henry
> Sent: Fri 8/10/2007 7:11 AM
> To: Steven Livingstone
> Cc: foaf-dev; OpenID General
> Subject: Re: [OpenID] cryptographics web of trust
>
>
>
> Thanks for the feedback. I have extended the blog post to describe
> how one can link up to other people's public keys, sign their public
> keys, and how one can sign parts of one's foaf file, using Dan
> Brickley and Tim Berners-Lee as examples.
>
> This develops a very powerful web of trust.
>
> http://blogs.sun.com/bblfish/entry/cryptographic_web_of_trust
>
> Henry
>
>
> On 9 Aug 2007, at 20:15, Steven Livingstone wrote:
>
>> Very cool.
>>
>> I did some work in encrypting FOAF files a few years back (well,
>> hacked something together in a few hours).
>> http://www.ecademy.com/node.php?id=4568
>>
>> I checked and it is still there:
>> http://livz.org/encrypt/PrivateFoaf.aspx
>>
>> With the FOAF URL :
>> http://www.ecademy.com/module.php?mod=network&op=foafrdf&uid=21584
>> and searching for the name "Robert Sullivan" and a password
>> "steven", you get my decrypted FOAF file.
>>
>> The limiting part of it all (to make it really easy) was the fact
>> you needed an identity "Robert Sullivan" and a shared secret
>> "steven" - this is why OpenID is so powerful. With an authenticated
>> OpenID, you would be able to decrypt the FOAF file automatically.
>>
>> I figured at the time that some online identity (which didn't
>> really exist) could easily be mapped to a corresponding public key,
>> allowing you to encrypt parts of your FOAF files (or any other
>> file) for specific users.
>>
>> I hadn't spent too much time on it but i'd sure like to see it move
>> forward in some way.
>>
>> I know there has been other work put into this stuff as well:
>> http://usefulinc.com/foaf/encryptingFoafFiles
>>
>> steven
>> http://livz.org <http://livz.org/>
>>
>>
>>> To: general@...; foaf-dev@...-project.org
>>> From: henry.story@...
>>> Date: Thu, 9 Aug 2007 18:31:57 +0200
>>> Subject: [OpenID] cryptographics web of trust
>>>
>>> Hi, following some of the conversations I had on the openid
>> forums, I
>>> have read up about web security and used that new gained
>> knowledge to
>>> enhance my foaf file with a link to my public PGP key and used that
>>> to sign my foaf file. Using this it is easy to see how one can
>> create
>>> a semantic cryptographic web of trust.
>>>
>>> http://blogs.sun.com/bblfish/entry/cryptographic_web_of_trust
>>>
>>> There is a lot more to add for sure, but this is a good starting
>>> point. Great fun too.
>>>
>>> Henry Story
>>> _______________________________________________
>>> general mailing list
>>> general@...
>>> http://openid.net/mailman/listinfo/general
>>
>>
>> See what you're getting into...before you go there See it!
>
> _______________________________________________
> general mailing list
> general@...
> http://openid.net/mailman/listinfo/general
>
>
>
>
> <samlp:Response Destination="http://localhost:9030/sp/ACS.saml2"
> InResponseTo="_KrYhdmh3KExWfP5o0CAs7C9mfi"
> IssueInstant="2007-08-11T05:45:26.614Z" ID="_JbuqXO6H-
> BQIoeYwpd0NIE88d6" Version="2.0"
> xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"><saml:Issuer
> xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">http://
> www.acmemls.com/request-auth.jsp</saml:Issuer><ds:Signature
> xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
> <ds:SignedInfo>
> <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-
> exc-c14n#"/>
> <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/
> xmldsig#rsa-sha1"/>
> <ds:Reference URI="#_JbuqXO6H-BQIoeYwpd0NIE88d6">
> <ds:Transforms>
> <ds:Transform Algorithm="http://www.w3.org/2000/09/
> xmldsig#enveloped-signature"/>
> <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
> </ds:Transforms>
> <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
> <ds:DigestValue>TOs5pUtgy8p2wiQjXJuRfxa2224=</ds:DigestValue>
> </ds:Reference>
> </ds:SignedInfo>
> <ds:SignatureValue>
> ctUDU/+NwF7GwNPlGa184G8a5BfnIi1Nmzp8uKCZ93T8gDJVKRBbJDzhhnZ8EF2Y9G
> +PpPvIWW7b
> Oq/wmW8iYg==
> </ds:SignatureValue>
> </ds:Signature><samlp:Status><samlp:StatusCode
> Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></
> samlp:Status><saml:Assertion Version="2.0"
> IssueInstant="2007-08-11T05:45:26.786Z"
> ID="eK2qsvd9xzsmzN7Z_V8sb08fqO-"
> xmlns:saml="urn:oasis:names:tc:SAML:
> 2.0:assertion"><saml:Issuer>http://www.acmemls.com/request-
> auth.jsp</saml:Issuer><saml:Subject><saml:NameID
> Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">%0d%
> 0a%3c!--+Processed+by+Id%3a+cwm.py%2cv+1.194+2007-08-06+16%3a13%3a56
> +syosi+Exp+--%3e%0d%0a%3c!--+++++using+base+file%3a%2f%2f%2fUsers%
> 2fhjs%2fDocuments%2fcard%2fcard.n3--%3e%0d%0a%0d%0a%0d%0a%3crdf%
> 3aRDF+xmlns%3d%22http%3a%2f%2fxmlns.com%2ffoaf%2f0.1%2f%22%0d%0a+++
> +xmlns%3aawol%3d%22http%3a%2f%2fbblfish.net%2fwork%2fatom-owl%
> 2f2006-06-06%2f%23%22%0d%0a++++xmlns%3acontact%3d%22http%3a%2f%
> 2fwww.w3.org%2f2000%2f10%2fswap%2fpim%2fcontact%23%22%0d%0a++++xmlns
> %3afoaf%3d%22http%3a%2f%2fxmlns.com%2ffoaf%2f0.1%2f%22%0d%0a+++
> +xmlns%3ageo%3d%22http%3a%2f%2fwww.w3.org%2f2003%2f01%2fgeo%
> 2fwgs84_pos%23%22%0d%0a++++xmlns%3aiana%3d%22http%3a%2f%
> 2fwww.iana.org%2fassignments%2frelation%2f%22%0d%0a++++xmlns%3ardf%
> 3d%22http%3a%2f%2fwww.w3.org%2f1999%2f02%2f22-rdf-syntax-ns%23%22%0d
> %0a++++xmlns%3ardfs%3d%22http%3a%2f%2fwww.w3.org%2f2000%2f01%2frdf-
> schema%23%22%0d%0a++++xmlns%3awot%3d%22http%3a%2f%2fxmlns.com%2fwot%
> 2f0.1%2f%22%3e%0d%0a%0d%0a++++%3cPersonalProfileDocument+rdf%3aabout
> %3d%22http%3a%2f%2fbblfish.net%2fpeople%2fhenry%2fcard%22%3e%0d%0a++
> ++++++%3ciana%3aalternate+rdf%3aresource%3d%22http%3a%2f%
> 2fbblfish.net%2fpeople%2fhenry%2fcard.n3%22%2f%3e%0d%0a++++++++%
> 3ciana%3aalternate+rdf%3aresource%3d%22http%3a%2f%2fbblfish.net%
> 2fpeople%2fhenry%2fcard.rdf%22%2f%3e%0d%0a++++++++%3cmaker+rdf%
> 3aresource%3d%22http%3a%2f%2fbblfish.net%2fpeople%2fhenry%2fcard%
> 23me%22%2f%3e%0d%0a++++++++%3cprimaryTopic+rdf%3aresource%3d%22http%
> 3a%2f%2fbblfish.net%2fpeople%2fhenry%2fcard%23me%22%2f%3e%0d%0a+++++
> +++%3ctitle%3eHenry+Story's+FOAF+file%3c%2ftitle%3e%0d%0a++++%3c%
> 2fPersonalProfileDocument%3e%0d%0a%0d%0a++++%3cPerson+rdf%3aabout%3d
> %22http%3a%2f%2fbblfish.net%2fpeople%2fhenry%2fcard%23me%22%3e%0d%0a
> ++++++++%3ccontact%3ahome+rdf%3aparseType%3d%22Resource%22%3e%0d%0a+
> +++++++++++%3ccontact%3aaddress+rdf%3aparseType%3d%22Resource%22%3e%
> 0d%0a++++++++++++++++%3ccontact%3acity%3eFontainebleau%3c%2fcontact%
> 3acity%3e%0d%0a++++++++++++++++%3ccontact%3acountry%3eFrance%3c%
> 2fcontact%3acountry%3e%0d%0a++++++++++++++++%3ccontact%3apostalCode%
> 3e77300%3c%2fcontact%3apostalCode%3e%0d%0a++++++++++++++++%3ccontact
> %3astreet%3e21+rue+Saint+Honore%3c%2fcontact%3astreet%3e%0d%0a++++++
> ++++++%3c%2fcontact%3aaddress%3e%0d%0a++++++++++++%3cgeo%3alat%
> 3e48.404532%3c%2fgeo%3alat%3e%0d%0a++++++++++++%3cgeo%3along%
> 3e2.700448%3c%2fgeo%3along%3e%0d%0a++++++++%3c%2fcontact%3ahome%3e%
> 0d%0a++++++++%3caimChatID%3eunbabelfish%3c%2faimChatID%3e%0d%0a+++++
> +++%3cbirthday%3e07-29%3c%2fbirthday%3e%0d%0a++++++++%
> 3ccurrentProject+rdf%3aresource%3d%22http%3a%2f%2fbblfish.net%2fwork
> %2fatom-owl%2f2006-06-06%2f%22%2f%3e%0d%0a++++++++%3ccurrentProject
> +rdf%3aresource%3d%22https%3a%2f%2fbloged.dev.java.net%2f%22%2f%3e%
> 0d%0a++++++++%3ccurrentProject+rdf%3aresource%3d%22https%3a%2f%
> 2fsommer.dev.java.net%2f%22%2f%3e%0d%0a++++++++%3cdepiction+rdf%
> 3aresource%3d%22http%3a%2f%2ffarm1.static.flickr.com%2f164%
> 2f373663745_1801c2dddf.jpg%3fv%3d0%22%2f%3e%0d%0a++++++++%
> 3cfamily_name%3eStory%3c%2ffamily_name%3e%0d%0a++++++++%3cgender%
> 3emale%3c%2fgender%3e%0d%0a++++++++%3cgivenname%3eHenry%3c%
> 2fgivenname%3e%0d%0a++++++++%3chomepage+rdf%3aresource%3d%22http%3a%
> 2f%2fbblfish.net%2f%22%2f%3e%0d%0a++++++++%3cknows+rdf%3aresource%3d
> %22http%3a%2f%2fdanbri.org%2ffoaf.rdf%23danbri%22%2f%3e%0d%0a+++++++
> +%3cknows+rdf%3aresource%3d%22http%3a%2f%2fdavelevy.info%2ffoaf.rdf%
> 23me%22%2f%3e%0d%0a++++++++%3cknows+rdf%3aresource%3d%22http%3a%2f%
> 2fpurl.org%2fcaptsolo%2fsemweb%2ffoaf-captsolo.rdf%23Uldis_Bojars%
> 22%2f%3e%0d%0a++++++++%3cknows+rdf%3aresource%3d%22http%3a%2f%
> 2ftorrez.us%2fwho%23elias%22%2f%3e%0d%0a++++++++%3cknows+rdf%
> 3aresource%3d%22http%3a%2f%2fweb.mac.com%2fthegearons%2fpeople%
> 2fPaulGearon%2ffoaf.rdf%23me%22%2f%3e%0d%0a++++++++%3cknows+rdf%
> 3aresource%3d%22http%3a%2f%2fwww.w3.org%2fPeople%2fBerners-Lee%
> 2fcard%23i%22%2f%3e%0d%0a++++++++%3cknows+rdf%3aresource%3d%22http%
> 3a%2f%2fwww.w3.org%2fPeople%2fConnolly%2f%23me%22%2f%3e%0d%0a+++++++
> +%3cknows+rdf%3aparseType%3d%22Resource%22%3e%0d%0a++++++++++++%
> 3crdf%3atype+rdf%3aresource%3d%22http%3a%2f%2fxmlns.com%2ffoaf%
> 2f0.1%2fPerson%22%2f%3e%0d%0a++++++++++++%3crdfs%3aseeAlso+rdf%
> 3aresource%3d%22http%3a%2f%2fwww.webmink.net%2ffoaf.rdf%22%2f%3e%0d%
> 0a++++++++++++%3cmbox_sha1sum%
> 3eee513cd82fea84825b803a44228fd9b765baf6d5%3c%2fmbox_sha1sum%3e%0d%
> 0a++++++++++++%3cname%3eSimon+Phipps%3c%2fname%3e%0d%0a++++++++%3c%
> 2fknows%3e%0d%0a++++++++%3cknows+rdf%3aparseType%3d%22Resource%22%3e
> %0d%0a++++++++++++%3crdf%3atype+rdf%3aresource%3d%22http%3a%2f%
> 2fxmlns.com%2ffoaf%2f0.1%2fPerson%22%2f%3e%0d%0a++++++++++++%3crdfs%
> 3aseeAlso+rdf%3aresource%3d%22http%3a%2f%2fdannyayers.com%2fme.rdf%
> 22%2f%3e%0d%0a++++++++++++%3cname%3eDanny+Ayers%3c%2fname%3e%0d%0a++
> ++++++%3c%2fknows%3e%0d%0a++++++++%3clogo+rdf%3aresource%3d%22%2fpix
> %2fbfish.large.jpg%22%2f%3e%0d%0a++++++++%3cmbox+rdf%3aresource%3d%
> 22mailto%3ahenry.story%40bblfish.net%22%2f%3e%0d%0a++++++++%3cmbox
> +rdf%3aresource%3d%22mailto%3ahenry.story%40gmail.com%22%2f%3e%0d%0a
> ++++++++%3cmbox+rdf%3aresource%3d%22mailto%3ahenry.story%40sun.com%
> 22%2f%3e%0d%0a++++++++%3cname%3eHenry+J.+Story%3c%2fname%3e%0d%0a+++
> +++++%3cnick%3ebblfish%3c%2fnick%3e%0d%0a++++++++%3copenid+rdf%
> 3aresource%3d%22http%3a%2f%2fbblfish.videntity.org%2f%22%2f%3e%0d%0a
> ++++++++%3copenid+rdf%3aresource%3d%22http%3a%2f%2fopenid.sun.com%
> 2fbblfish%22%2f%3e%0d%0a++++++++%3cpastProject+rdf%3aresource%3d%
> 22http%3a%2f%2fbabelfish.altavista.com%2f%22%2f%3e%0d%0a++++++++%
> 3cphone+rdf%3aresource%3d%22tel%3a%2b1-510-931-5491%22%2f%3e%0d%0a++
> ++++++%3cphone+rdf%3aresource%3d%22tel%3a%2b33-8-70-44-86-64%22%2f%
> 3e%0d%0a++++++++%3cschoolHomepage+rdf%3aresource%3d%22http%3a%2f%
> 2fwww.bbk.ac.uk%2fphil%2f%22%2f%3e%0d%0a++++++++%3cschoolHomepage
> +rdf%3aresource%3d%22http%3a%2f%2fwww.doc.ic.ac.uk%2f%22%2f%3e%0d%0a
> ++++++++%3cschoolHomepage+rdf%3aresource%3d%22http%3a%2f%
> 2fwww.kcl.ac.uk%2fkis%2fschools%2fhums%2fphilosophy%2f%22%2f%3e%0d%
> 0a++++++++%3ctitle%3eMr%3c%2ftitle%3e%0d%0a++++++++%3cweblog+rdf%
> 3aresource%3d%22http%3a%2f%2fbblfish.net%2fblog%2f%22%2f%3e%0d%0a+++
> +++++%3cweblog+rdf%3aresource%3d%22http%3a%2f%2fblogs.sun.com%
> 2fbblfish%2f%22%2f%3e%0d%0a++++++++%3cweblog+rdf%3aresource%3d%
> 22http%3a%2f%2fdel.icio.us%2fbblfish%22%2f%3e%0d%0a++++++++%
> 3cworkplaceHomepage+rdf%3aresource%3d%22http%3a%2f%2fsun.com%22%2f%
> 3e%0d%0a++++%3c%2fPerson%3e%0d%0a%0d%0a++++%3crdf%3aDescription+rdf%
> 3aabout%3d%22http%3a%2f%2fbblfish.net%2fpeople%2fhenry%2fcard.n3%22%
> 3e%0d%0a++++++++%3cawol%3atype%3etext%2frdf%2bn3%3c%2fawol%3atype%3e
> %0d%0a++++++++%3cwot%3aassurance+rdf%3aresource%3d%22http%3a%2f%
> 2fbblfish.net%2fpeople%2fhenry%2fcard.n3.asc%22%2f%3e%0d%0a++++%3c%
> 2frdf%3aDescription%3e%0d%0a%0d%0a++++%3crdf%3aDescription+rdf%
> 3aabout%3d%22http%3a%2f%2fbblfish.net%2fpeople%2fhenry%2fcard.rdf%
> 22%3e%0d%0a++++++++%3cawol%3atype%3eapplication%2frdf%2bxml%3c%
> 2fawol%3atype%3e%0d%0a++++++++%3cwot%3aassurance+rdf%3aresource%3d%
> 22http%3a%2f%2fbblfish.net%2fpeople%2fhenry%2fcard.rdf.asc%22%2f%3e%
> 0d%0a++++%3c%2frdf%3aDescription%3e%0d%0a%0d%0a++++%3crdf%
> 3aDescription+rdf%3aabout%3d%22http%3a%2f%2fdanbri.org%2fdanbri-
> pubkey.txt%22%3e%0d%0a++++++++%3cwot%3aassurance+rdf%3aresource%3d%
> 22danbri.pubkey.asc.asc%22%2f%3e%0d%0a++++%3c%2frdf%3aDescription%3e
> %0d%0a%0d%0a++++%3cPerson+rdf%3aabout%3d%22http%3a%2f%2fdanbri.org%
> 2ffoaf.rdf%23danbri%22%3e%0d%0a++++++++%3cname%3eDan+Brickley%3c%
> 2fname%3e%0d%0a++++%3c%2fPerson%3e%0d%0a%0d%0a++++%3cPerson+rdf%
> 3aabout%3d%22http%3a%2f%2fdavelevy.info%2ffoaf.rdf%23me%22%3e%0d%0a+
> +++++++%3cname%3eDave+Levy%3c%2fname%3e%0d%0a++++%3c%2fPerson%3e%0d%
> 0a%0d%0a++++%3cPerson+rdf%3aabout%3d%22http%3a%2f%2fpurl.org%
> 2fcaptsolo%2fsemweb%2ffoaf-captsolo.rdf%23Uldis_Bojars%22%3e%0d%0a++
> ++++++%3cname%3eUldis+Bojars%3c%2fname%3e%0d%0a++++%3c%2fPerson%3e%
> 0d%0a%0d%0a++++%3cPerson+rdf%3aabout%3d%22http%3a%2f%2ftorrez.us%
> 2fwho%23elias%22%3e%0d%0a++++++++%3cname%3eElias+Torres%3c%2fname%3e
> %0d%0a++++%3c%2fPerson%3e%0d%0a%0d%0a++++%3cPerson+rdf%3aabout%3d%
> 22http%3a%2f%2fweb.mac.com%2fthegearons%2fpeople%2fPaulGearon%
> 2ffoaf.rdf%23me%22%3e%0d%0a++++++++%3cname%3ePaul+Gearon%3c%2fname%
> 3e%0d%0a++++%3c%2fPerson%3e%0d%0a%0d%0a++++%3cPerson+rdf%3aabout%3d%
> 22http%3a%2f%2fwww.w3.org%2fPeople%2fBerners-Lee%2fcard%23i%22%3e%0d
> %0a++++++++%3cname%3eTim+Berners+Lee%3c%2fname%3e%0d%0a++++%3c%
> 2fPerson%3e%0d%0a%0d%0a++++%3cPerson+rdf%3aabout%3d%22http%3a%2f%
> 2fwww.w3.org%2fPeople%2fConnolly%2f%23me%22%3e%0d%0a++++++++%3cname%
> 3eDan+Connolly%3c%2fname%3e%0d%0a++++%3c%2fPerson%3e%0d%0a%0d%0a++++
> %3crdf%3aDescription%3e%0d%0a++++++++%3crdf%3atype+rdf%3aresource%3d
> %22http%3a%2f%2fxmlns.com%2fwot%2f0.1%2fPubKey%22%2f%3e%0d%0a+++++++
> +%3cwot%3afingerprint%3eE5C6CDCC5C1401B6EB2BC5EAED0BF9DBC7DEAB05%3c%
> 2fwot%3afingerprint%3e%0d%0a++++++++%3cwot%3ahex_id%3eC7DEAB05%3c%
> 2fwot%3ahex_id%3e%0d%0a++++++++%3cwot%3aidentity+rdf%3aresource%3d%
> 22http%3a%2f%2fbblfish.net%2fpeople%2fhenry%2fcard%23me%22%2f%3e%0d%
> 0a++++++++%3cwot%3alength+rdf%3adatatype%3d%22http%3a%2f%
> 2fwww.w3.org%2f2001%2fXMLSchema%23integer%22%3e1024%3c%2fwot%
> 3alength%3e%0d%0a++++++++%3cwot%3apubkeyAddress+rdf%3aresource%3d%
> 22http%3a%2f%2fbblfish.net%2fpeople%2fhenry%2fhenry.pubkey.asc%22%2f
> %3e%0d%0a++++%3c%2frdf%3aDescription%3e%0d%0a%0d%0a++++%3crdf%
> 3aDescription%3e%0d%0a++++++++%3crdf%3atype+rdf%3aresource%3d%22http
> %3a%2f%2fxmlns.com%2fwot%2f0.1%2fPubkey%22%2f%3e%0d%0a++++++++%3cwot
> %3ahex_id%3e9FC3D57E%3c%2fwot%3ahex_id%3e%0d%0a++++++++%3cwot%
> 3aidentity+rdf%3aresource%3d%22http%3a%2f%2fwww.w3.org%2fPeople%
> 2fBerners-Lee%2fcard%23i%22%2f%3e%0d%0a++++++++%3cwot%
> 3apubkeyAddress+rdf%3aresource%3d%22timbl.pubkey.asc%22%2f%3e%0d%0a+
> +++%3c%2frdf%3aDescription%3e%0d%0a%0d%0a++++%3crdf%3aDescription%3e
> %0d%0a++++++++%3crdf%3atype+rdf%3aresource%3d%22http%3a%2f%
> 2fxmlns.com%2fwot%2f0.1%2fPubKey%22%2f%3e%0d%0a++++++++%3cwot%
> 3ahex_id%3eB573B63A%3c%2fwot%3ahex_id%3e%0d%0a++++++++%3cwot%
> 3aidentity+rdf%3aresource%3d%22http%3a%2f%2fdanbri.org%2ffoaf.rdf%
> 23danbri%22%2f%3e%0d%0a++++++++%3cwot%3apubkeyAddress+rdf%3aresource
> %3d%22http%3a%2f%2fdanbri.org%2fdanbri-pubkey.txt%22%2f%3e%0d%0a++++
> %3c%2frdf%3aDescription%3e%0d%0a%3c%2frdf%3aRDF%3e</
> saml:NameID><saml:SubjectConfirmation
> Method="urn:oasis:names:tc:SAML:
> 2.0:cm:bearer"><saml:SubjectConfirmationData
> InResponseTo="_KrYhdmh3KExWfP5o0CAs7C9mfi"
> NotOnOrAfter="2007-08-11T05:50:26.833Z" Recipient="http://localhost:
> 9030/sp/ACS.saml2"/></saml:SubjectConfirmation></
> saml:Subject><saml:Conditions <http://localhost:9030/sp/ACS.saml2%
> 22/%3E%3C/saml:SubjectConfirmation%3E%3C/saml:Subject%3E%
> 3Csaml:Conditions> NotOnOrAfter="2007-08-11T05:50:26.817Z"
> NotBefore="2007-08-11T05:40:26.817Z"><saml:AudienceRestriction><saml:A
> udience>http://www.acmemls.com/request-auth.jsp</saml:Audience></
> saml:AudienceRestriction></saml:Conditions><saml:AuthnStatement
> AuthnInstant="2007-08-11T05:45:26.770Z"
> SessionIndex="eK2qsvd9xzsmzN7Z_V8sb08fqO-"><saml:AuthnContext><saml:Au
> thnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</
> saml:AuthnContextClassRef></saml:AuthnContext></
> saml:AuthnStatement></saml:Assertion></samlp:Response>
>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (Darwin)
iD8DBQFGwLXS7Qv528feqwURAvf1AJ9b3mWBn+Dn+6eE3Gdxx5kUKGpbeQCfccTV
ClZ6euUnZa9H3TSf273+99k=
=eZ/+
-----END PGP SIGNATURE-----
Dear people of rest-discuss:

I work on the allmydata.org "Tahoe" peer-to-peer storage grid.

http://allmydata.org

We are about to release v0.5 of this software, which provides a
distributed storage grid. The big new feature of v0.5 is a programmable
API, so that you can control the storage grid in a programming language
of your choice without having to understand how the storage grid is
implemented. We chose to offer a RESTful API with a few bits of JSON
encoding here and there.

This is the first time we've tried to design an API according to REST
principles, and I would be very interested in feedback from the people
of this group about how well this API fits the precepts of the REST
approach.

One particular question I have is: why the heck don't web browsers
support PUT and DELETE actions? Our API can be used directly from a
standard non-JavaScript-enabled web browser, but for that purpose we
have to contort it to encode PUT and DELETE commands into POST commands
so that the web browser will send the command to the web server.

Thank you for your attention.

Regards,
Zooko
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
>>>>> "zooko" == zooko <zooko@...> writes:
zooko> One particular question I have is: why the heck don't web
zooko> browsers support PUT and DELETE actions? Our API can be
zooko> used directly from a standard not-javascript-enabled web
zooko> browser, but for that purpose we have to contort it to
zooko> encode PUT and DELETE commands into POST commands so that
zooko> the web browser will send the command to the web server.
The $1 million question. Note that XHR does support them, but because
of browser quirks you can't rely on it, so misusing POST is the only
option.
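For reference, one common shape of that POST misuse is the "method override" trick: the form POSTs a hidden _method field and a small server-side shim rewrites the request method before the application sees it. The field name and WSGI setup below are a convention for illustration, not anything the browsers themselves know about.

```python
# WSGI middleware letting a POST tunnel PUT/DELETE via a "_method"
# query parameter (query string rather than form body, so we don't
# have to consume wsgi.input here).  This is a sketch of the
# workaround, not a recommendation.
from urllib.parse import parse_qs

def method_override(app):
    """Rewrite POST to PUT/DELETE when the client asks for it."""
    def shim(environ, start_response):
        if environ.get("REQUEST_METHOD") == "POST":
            qs = parse_qs(environ.get("QUERY_STRING", ""))
            wanted = qs.get("_method", [""])[0].upper()
            if wanted in ("PUT", "DELETE"):
                environ["REQUEST_METHOD"] = wanted
        return app(environ, start_response)
    return shim

def echo_method(environ, start_response):
    """Toy app that just reports the method it saw."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["REQUEST_METHOD"].encode()]

app = method_override(echo_method)
```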
- --
Cheers,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.8 <http://mailcrypt.sourceforge.net/>
iD8DBQFGwgzSIyuuaiRyjTYRAmydAJ4oL0R63T4//tam48YYTZw94CqRXwCg6pm7
gsNH5EkcmC8hfs8uPxVlV9s=
=jIB1
-----END PGP SIGNATURE-----
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
>>>>> "Karen" == Karen <karen.cravens@...> writes:
Karen> On 8/14/07, Berend de Boer <berend@...> wrote:
>> The $1 million question. Note that XHR does support it, but
>> because of browser quirks, you can't rely on it, so misusing
>> POST is the only option.
Karen> You can't rely on it? Say it ain't so!
Karen> Does anyone know if any of the frameworks (jQuery!) work
Karen> around this with their other browser-quirk-concealing
Karen> tricks? Because dang, I was *really* looking forward to
Karen> moving past the "dumb browser catering-to" part of my
Karen> project into the "JS-enabled browser
Karen> doing-things-straightforwardly" phase.
I asked the YUI guys why they didn't support PUT and DELETE, and they
answered it was because of the quirks. There's no work-around in these
cases. If you search the YUI developer forums you will find a link to
a page where you can test PUT/DELETE support and even among A grade
browsers, the support is just flaky.
- --
Live long and prosper,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.8 <http://mailcrypt.sourceforge.net/>
iD8DBQFGwimXIyuuaiRyjTYRAsTEAKDFAIvUiGkU+oXwxtmE9ZxWKcadqwCg3T+/
byq2Hks9jZCaMZQBuDvILJE=
=VVyZ
-----END PGP SIGNATURE-----
zooko wrote:
> One particular question I have is: why the heck don't web browsers
> support PUT and DELETE actions?

History. IIRC, the very first browsers had PUT and DELETE, but used
them to "save" edits to HTML files. So while PUT and DELETE were
defined as actions on URLs that affected the resource found there (a
definition that became firmer and clearer later on), the actual use for
them was not something that worked with forms. There's no reason why a
form shouldn't be able to PUT or even DELETE, but it wasn't how things
were done. The move away from browsers having an editing function meant
that PUT and DELETE were largely forgotten in terms of what browsers do.
Hi-
I have REST design question that hits on the topic of personalization,
but in a way I don't think has been discussed before.
I have representations that contain so much data that they have become
a processing bottleneck for clients. By processing I mean parsing and
dealing with the XML, so normal http caching isn't going to help.
What's been requested is that the clients can customize the
representation to contain only what they need. They want to be able to
say don't send me these nodes, only go this deep with those nodes,
etc. So, in essence, they want to personalize the representation.
I see a couple of options to accomplish this.
Assuming the URI is:
http://example.com/v1/foo/{id}
1) Have preset levels of granularity:
http://example.com/v1/foo/{id}/shortform
http://example.com/v1/foo/{id}/mediumform
http://example.com/v1/foo/{id} <--- default long form
2) Use a complex query string:
http://example.com/v1/foo/{id}?shownode1=true;node2depth=3
3) Allow the client to post a set of rules and get a separate URI
space for that rule set:
POST to http://example.com/v1/
shownode1=true
node2depth=3
Result:
201 Created
Location: /v1/{rulesetid}/
Then the client would traverse to:
http://example.com/v1/{rulesetid}/foo/{id}
Note: the rule set would not be unique to each client. If two clients
had the same rules, they would use the same URI.
Option 1 would be the easiest, but it's limiting due to the fixed set
of granularity levels.
Option 2 would require a rather complex query string but would provide
the flexibility.
Option 3 seems the slickest but I'm not sure if it's as RESTful as I
think it is.
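To make option 2 concrete, the server side could be sketched roughly as below: walk the XML tree and apply "exclude these nodes / only go this deep" rules before serialising. The element names and rule shape are invented; per-node depth limits (node2depth=3) would be a straightforward extension of the single max_depth shown here.

```python
# Prune a representation according to client-supplied rules parsed
# from the query string.  Hypothetical sketch, not a real API.
import xml.etree.ElementTree as ET

def prune(elem, exclude=(), max_depth=None, _depth=0):
    """Drop excluded children and anything deeper than max_depth."""
    for child in list(elem):
        too_deep = max_depth is not None and _depth + 1 > max_depth
        if child.tag in exclude or too_deep:
            elem.remove(child)
        else:
            prune(child, exclude, max_depth, _depth + 1)
    return elem

doc = ET.fromstring(
    "<foo><node1>big</node1><node2><a><b/></a></node2></foo>")
# e.g. ?shownode1=false;maxdepth=2
pruned = prune(doc, exclude=("node1",), max_depth=2)
```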
Has anyone else had to deal with unusually large representations that
need to be trimmed down on a client by client basis? Personally, I'd
rather avoid the whole issue and add more hierarchy to the URI space
and have the clients act a little less dumb. That may be an option in
the end, but for now this is all I have to work with, so any opinions
will be greatly appreciated.
Cheers,
Michael
mmakunas wrote:
> What's been requested is that the clients can customize the
> representation to contain only what they need. They want to be able to
> say don't send me these nodes, only go this deep with those nodes,
> etc. So, in essence, they want to personalize the representation.
I don't think it's helpful to think of this as a personalisation.
Personalisation means "give me this the way I like it."
This is more "give me this as follows."
> Option 3 seems the slickest but I'm not sure if it's as RESTful as I
> think it is.
I agree. It's not so much non-RESTful as not taking as much potential
advantage of REST as the other two.
> Has anyone else had to deal with unusually large representations that
> need to be trimmed down on a client by client basis?
Yes, and I've used both option 1 and 2 where they seemed useful.
> Personally, I'd
> rather avoid the whole issue and add more hierarchy to the URI space
> and have the clients act a little less dumb.
I wouldn't recommend so much more hierarchy as more resources in a
larger URI space - that may very well fit in with more hierarchy, but
I'd consider how hierarchical things are to be another question.
If http://example.com/v1/foo/{id} consists of links to representations
of the "nodes" rather than full information on them then the client can
decide which items are of interest to it.
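That shape can be sketched as follows: the parent representation carries links to the "nodes" instead of inlining them, so each client GETs only the items it cares about. The URIs and markup are hypothetical.

```python
# Build a link-only representation in place of full node content.
def summary_representation(base_uri, node_ids):
    """List links to member resources rather than embedding them."""
    links = "\n".join(
        '  <link rel="item" href="%s/nodes/%s"/>' % (base_uri, n)
        for n in node_ids)
    return "<foo>\n%s\n</foo>" % links

rep = summary_representation("http://example.com/v1/foo/42", [1, 2, 3])
```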
Any RESTful design (whether human-readable or machine-readable) contains
a tension between the advantages and disadvantages of putting every
available piece of information into a single representation (in
human-readable applications that would be a one-page website) and of
splitting everything out as finely as possible. We strive to find
natural "lumps" of information. If your problem is that your lumps are
large, then you may just need more smaller lumps. The advantages are
greater if some of the finer grains are shared between more than one of
the larger "lumps" you have now, since this magnifies the advantages of
caching (both the caching inherent to REST and the possibility of
clients caching the objects they create when parsing).
Consider this me checking that something I'm doing isn't off the wall.
Right now I'm developing a domain registration and management system for work.
We'll be publishing the specs for it soon enough and it's not considered all
that important to keep it under lids, so I've no problems talking about it
here.
The problem I've run into is what happens when one of the registries the
system talks to has some kind of transient fault or we're unable to connect
to the registry for some reason or another. We want the system to queue such
requests to retry them in a batch job later, so when this happens, the server
is meant to return a 202 Accepted response to the client. All fine and dandy.
The problem I've ran into, and combing the HTTP/1.1 spec isn't helping here,
is that it's not entirely clear how the server should indicate the location
to poll the status from. To me, it makes sense, unless the request was a batch
request (which, in this case, isn't so) to use the Location header. However,
the spec says:
"The entity returned with this response SHOULD include an indication of the
request's current status and either a pointer to a status monitor or some
estimate of when the user can expect the request to be fulfilled."
Which, to me, makes things a little awkward for the client.
The prior art I have to support my thoughts on this is Paul Prescod's
RESTmail[1] example and the REST Goes to Maui[2] example on RestWiki. Can
anybody think of any reasons why I shouldn't go down this route?
K.
[1] http://www.prescod.net/rest/restmail/
[2] http://rest.blueoxen.net/cgi-bin/wiki.pl?RestGoesToMaui
--
Blacknight Internet Solutions Ltd. <http://blacknight.ie/>
Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland
Company No.: 370845
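[Editorial note: a sketch of the approach being asked about above, reconciling both readings of the spec: a 202 Accepted that carries a Location header *and* the status entity RFC 2616 asks for. The URI scheme and field names here are invented for illustration.]

```python
# Sketch of a 202 Accepted response for a queued registry request. It sets
# Location to the status resource AND includes an entity carrying the
# current status, a pointer to the status monitor, and a retry estimate.

import json

def accepted_response(request_id):
    status_uri = f"/queue/{request_id}"       # hypothetical status URI scheme
    headers = {
        "Status": "202 Accepted",
        "Location": status_uri,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "state": "queued",
        "monitor": status_uri,                # pointer to a status monitor
        "retry-after-seconds": 1800,          # rough fulfilment estimate
    })
    return headers, body
```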
Jon Hanna wrote:
> mmakunas wrote:
> > Personally, I'd
> > rather avoid the whole issue and add more hierarchy to the URI space
> > and have the clients act a little less dumb.
>
> I wouldn't recommend so much more hierarchy as more resources in a
> larger URI space - that may very well fit in with more hierarchy, but
> I'd consider how hierarchical things are to be another question.
Agreed. More hierarchy wasn't a very good way for me to describe it. A
"richer" URI space is what I really want.
> If http://example.com/v1/foo/{id} consists of links to representations
> of the "nodes" rather than full information on them then the client can
> decide which items are of interest to it.
>
> Any RESTful design (whether human-readable or machine-readable) contains
> a tension between the advantages and disadvantages of putting every
> available piece of information into a single representation (in
> human-readable applications that would be a one-page website) and of
> splitting everything out as finely as possible. We strive to find
> natural "lumps" of information. If your problem is that your lumps are
> large, then you may just need more smaller lumps. The advantages are
> greater if some of the finer grains are shared between more than one of
> the larger "lumps" you have now, since this magnifies the advantages of
> caching (both the caching inherent to REST and the possibility of
> clients caching the objects they create when parsing).
My problem is that for a variety of reasons (not all of which I
understand or know yet) there are clients that want varying degrees of
lumpiness. Using the human-readable analogy, I need to provide for very
shallow websites with very few links, deep websites with a more natural
number of links, and website where certain links and lumps just aren't
there.
Right now, I'm leaning towards the query string approach. I think that
will let me design the URI space in its most "natural" form, let
clients adjust that, and keep all the benefits of REST.
-Michael
In my desire to make RESTful web applications I've been very unhappy with
the lack of full support for PUT and DELETE methods by today's browsers.
(Is there any light visible in the tunnel on that?) I've been looking at
work-arounds.

Is it considered OK to make a web application that sends GET/POST methods
with a custom header, i.e. X-HTTP-METHOD?

This seems a workable, reasonable stop-gap until the ridiculous situation
of modern browsers not fully supporting the protocol is finally improved.
<rant barely suppressed>

I invite your comments. Anyone else using this? Thoughts on it? Will it
work in the real world? Any other work-arounds? etc.

Cordially,
Scott
For what it is worth, the allmydata.org Tahoe project just faced this same
issue, and we worked around it by creating forms that do POSTs with query
arguments explaining what we really meant, in addition to supporting PUT
and DELETE:

http://allmydata.org/trac/tahoe/browser/docs/webapi.txt

The forms and POSTs work in a standard web browser even without JavaScript.
Presumably people using this API in a program will tend to use the actual
PUT and DELETE.

Regards,
Zooko
Scott Chapman <scott_list@...> writes:

> In my desire to make RESTful web applications I've been very unhappy
> with the lack of full support for PUT and DELETE methods by todays
> browsers. (Is there any light visible in the tunnel on that?) I've been
> looking at work-arounds.
>
> Is it considered Ok to make a web application that sends GET/POST
> methods with a custom header, i.e. X-HTTP-METHOD?
>
> This seems a workable, reasonable stop-gap method until the ridiculous
> situation of modern browsers not fully supporting the protocol is
> finally improved. <rant barely suppressed>
>
> I invite your comments. Anyone else using this? Thoughts on it? Will it
> work in the real world? Any other work-arounds? etc.

Look at REST-AHAH.

I've tried your approach... it gets rejected quite often because many
corp. proxies don't seem to pass through X- headers.

--
Nic Ferrier
http://prooveme.com - easy, simple, certificated OpenID
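[Editorial note: for readers following along, the override pattern under discussion typically resolves on the server side roughly as sketched below. The header name X-HTTP-Method-Override and the `_method` form parameter are conventions used by some frameworks, not part of HTTP itself; both spellings here are assumptions.]

```python
# Sketch of server-side method-override resolution. A POST may carry the
# intended method either in a header (which, as noted above, some corporate
# proxies strip) or in a form/query parameter as a fallback. Only POST may
# be overridden, and only to a small allow-list of unsafe methods.

ALLOWED_OVERRIDES = {"PUT", "DELETE"}

def effective_method(request_method, headers, params):
    """Resolve the method the client actually meant."""
    if request_method != "POST":
        return request_method             # never override GET and friends
    override = (headers.get("X-HTTP-Method-Override")
                or params.get("_method", "")).upper()
    if override in ALLOWED_OVERRIDES:
        return override
    return request_method
```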
On Aug 17, 2007, at 3:32 PM, Scott Chapman wrote:

> In my desire to make RESTful web applications I've been very unhappy
> with the lack of full support for PUT and DELETE methods by todays
> browsers. (Is there any light visible in the tunnel on that?) I've been
> looking at work-arounds.

This seems to come up quite often. I fail to understand what people are
complaining about. Javascript running within a browser context is
inherently untrusted. Untrusted scripts should not be allowed to perform
unsafe actions without user intervention, period. The only reason POST is
allowed (and only to the host where the script came from, hopefully) is
because there is no other method for sending payload-driven safe queries.

It's like the farm kids I met once in Germany, who thought they could
trick us suburban visitors into pissing on an electric fence. I don't
know what they were trying to say to convince us it would be okay (not
having much grasp of the language at age 12), but it sure didn't take
much imagination to figure out the right answer.

Instead of railing against the browser developers for their common sense
restriction, we should be designing a framework under which a browser can
know which methods are allowed silently, which methods are allowed with
user confirmation, and which methods are not allowed at all. Then the
framework can be defined with configurable defaults for browser context
versus extension context, and with the ability to override those defaults
under certain conditions.

If someone takes the time to do all that, just as other people previously
took the time to define security contexts for applets and related things,
then I am sure the browser vendors will be happy to standardize on one
security framework for XHR.

....Roy
>>>>> "Roy" == Roy T Fielding <fielding@...> writes:
Roy> Untrusted scripts should not be allowed to perform unsafe
Roy> actions without user intervention, period. The only reason
Roy> POST is allowed (and only to the host where the script came
Roy> from, hopefully) is because there is no other method for
Roy> sending payload-driven safe queries.
So how is sending DELETE over POST any safer than just allowing DELETE?
Roy> Instead of railing against the browser developers for their
Roy> common sense restriction
The point is not that they restrict it, but that the implementations
don't fully support the HTTP standard. None of them restricts DELETE
or PUT in any way.
--
Cheers,
Berend de Boer
PS: This email has been digitally signed if you wonder what the strange
characters are that your email client displays.
PGP public key: http://www.pobox.com/~berend/berend-public-key.txt
On Aug 17, 2007, at 7:13 PM, Karen wrote:

> I can't speak for anyone else, but *I'm* chiefly complaining about
> browsers being unable to do PUT/DELETE from a form *with* human
> intervention. They *can* do it (sometimes anyway) in Javascript,
> which kind of seems the opposite of what you're saying.

Er, I thought that Mark convinced them to fix that a long time ago.
I'm not up to date with the latest HTML discussions. Maybe I should
get back in the fray.

> Darned if I can figure out how to make an electric fence analogy
> out of it, though. At least, not without resorting to an "In Soviet
> Russia..."

I would have described the bit where one of the kids proceeded to
demonstrate by example, assuming that I just didn't understand what
they were talking about. Of course, he missed.

[Yes, that analogy really did occur in real life, when I was 12 and
living for a month in a small town near Saarbrücken in West Germany,
where the kids were just as much a bunch of characters as in any small
town. Fortunately, we were the first American kids that they had met
who could actually play soccer, after 6 years in AYSO, so we got along
just fine.]

....Roy
On 8/18/07, Roy T. Fielding <fielding@...> wrote:
> On Aug 17, 2007, at 7:13 PM, Karen wrote:
> > I can't speak for anyone else, but *I'm* chiefly complaining about
> > browsers being unable to do PUT/DELETE from a form *with* human
> > intervention. They *can* do it (sometimes anyway) in Javascript,
> > which kind of seems the opposite of what you're saying.
>
> Er, I thought that Mark convinced them to fix that a long time ago.
> I'm not up to date with the latest HTML discussions. Maybe I should
> get back in the fray.

I'm not sure exactly what is happening. Support for PUT and DELETE was
part of the original WHATWG document:

http://www.whatwg.org/specs/web-forms/2004-06-27-call-for-comments/#methodAndEnctypes

but the current working draft is devoid of content:

http://www.whatwg.org/specs/web-apps/current-work/#forms

Regards,
Alan Dean
http://thoughtpad.net/alan-dean
http://simplewebservices.org
Personally, I have no problem with using JS as a means of submitting
forms; then I can do all sorts of new things:

- use PUT and DELETE quite happily (where firewalls allow it)
- send complex content
- do validation

The trouble is that Javascript is still considered a freaky, funky thing
to use. I know people who turn Javascript off in their browsers for fear
of cross-site scripting attacks. Screen readers don't seem to support it
that well.

I think addressing these two problems might solve more problems. We could
use Javascript for complex functionality and not worry about the
behavioural limitations of HTML too much.

--
Nic Ferrier
http://prooveme.com - easy, simple, certificated OpenID
Roy T. Fielding wrote:
> On Aug 17, 2007, at 3:32 PM, Scott Chapman wrote:
>> In my desire to make RESTful web applications I've been very unhappy
>> with the lack of full support for PUT and DELETE methods by todays
>> browsers. (Is there any light visible in the tunnel on that?) I've been
>> looking at work-arounds.
>
> This seems to come up quite often. I fail to understand what
> people are complaining about. Javascript running within a browser
> context is inherently untrusted. Untrusted scripts should not be
> allowed to perform unsafe actions without user intervention, period.
> The only reason POST is allowed (and only to the host where the
> script came from, hopefully) is because there is no other method
> for sending payload-driven safe queries.

Roy,

Are you saying that XHR should not be allowed to do DELETE/PUT/POST
without human intervention? What do you mean? Should the browser have a
popup each time Google's search wants to do its assist, for instance?
...or a configurable that has to be set, saying "foo.google.com" is
"safe"?

Regardless of the need for human intervention, if the browser XHR
doesn't support the protocol, where are we?

It seems to me that you designed the HTTP protocol to work this way.
It's 2007 and they still don't "get it". 'Seems ridiculous to me!

XHR does these things behind the scenes all the time, with no explicit
action on the part of the user. I simply want all of the HTTP protocol
to be supported this way, rather than a subset. I should have clarified
in my original post that this was only intended in an XHR context.
Safari still doesn't do XHR PUT/DELETE as far as I know and I'm
interested in XHR workarounds that work.

Scott
On Aug 17, 2007, at 11:56 PM, Alan Dean wrote:

> I'm not sure exactly what is happening. Support for PUT and DELETE was
> part of the original WHATWG document:
>
> http://www.whatwg.org/specs/web-forms/2004-06-27-call-for-comments/#methodAndEnctypes
>
> but the current working draft is devoid of content:
>
> http://www.whatwg.org/specs/web-apps/current-work/#forms

Yeah, that surprised me last night when I went and looked. It was
annoying enough for me to submit the invited expert form, just to try
and figure out what is going on at the W3C this time. And that new
"ping" feature is the most vulnerable idea I've seen in years. WTF?

....Roy
On Aug 18, 2007, at 7:01 AM, Scott Chapman wrote:
> Are you saying that XHR should not be allowed to do DELETE/PUT/POST
> without human intervention?
Yes, when it is part of the script received in browser context
(i.e., part of the content received from another site, as opposed
to javascript running within an installed browser extension).
GET is the only method that can be safely used without explicit
agreement by the user. Part of that agreement can be a user-defined
configuration, such as "okay to use these methods to the same domain
as their source" (the applet security model), and part can be reasonable
confirmation dialogs "are you sure you want to delete {URL}?".
> What do you mean? Should the browser have a popup each time Google's
> search wants to do its assist, for instance? ...or a configurable that
> has to be set, saying "foo.google.com" is "safe"?
There are many ways to do that, including reasonable defaults that
can be overridden for more (or less) secure environments.
Keep in mind that content received via the Web cannot be trusted, ever,
and the more software-as-a-service sites grow, the more vulnerable that
real people's data will become to actions hidden within scripts and
forms and any other mechanism capable of unsafe actions. Would you
allow any script to delete files on your local hard drive? What makes
your web-based iDisk or S3 or whatever any less important?
> Regardless of the need for human intervention, if the browser XHR
> doesn't support the protocol, where are we?
>
> It seems to me that you designed the HTTP protocol to work this way.
> It's 2007 and they still don't "get it". 'Seems ridiculous to me!
I am past that. There are browsers out there with known network
read errors (they can't see CRLF if it occurs on 512 byte boundaries,
for example). It really is astonishing how few developers care about
the communication layer -- all they care about is the GUI.
> XHR does these things behind the scenes all the time, with no explicit
> action on the part of the user. I simply want all the HTTP protocol to
> be supported this way, rather than a subset. I should have
> clarified in
> my original post that this was only intended in an XHR context. Safari
> still doesn't do XHR PUT/DELETE as far as I know and I'm interested in
> XHR workarounds that work.
I want a pony too. The point is that people complain about this all
the time, yet I've never seen one person take the time to actually
specify what would be safe behavior for all browsers to implement as
a standard. Mark is the only one I've seen even bother to try to
get the standards fixed, aside from my repeated expressions when
I was on the TAG (that went nowhere). Essentially, this runs core
to the failure of the W3C to do what it was originally created to do:
get the non-IETF parts of the Web in standard agreement.
Sure, that won't solve the implementation issue right away, but I can
guarantee that those issues will not be solved until the lack of
specification excuse is gone. Right now, the HTML "standard" claims
those methods don't even exist (a known bug).
....Roy
On Aug 17, 2007, at 10:02 PM, Berend de Boer wrote:

> Roy> Untrusted scripts should not be allowed to perform unsafe
> Roy> actions without user intervention, period. The only reason
> Roy> POST is allowed (and only to the host where the script came
> Roy> from, hopefully) is because there is no other method for
> Roy> sending payload-driven safe queries.
>
> So how is sending DELETE over POST any safer than just allowing
> DELETE?

Not at all -- it is simply that someone needed POST in spite of the
known security hole that was created by enabling it, and that point of
view has prevailed. The same can't be said for DELETE. It is really
easy to define a rational security model for all of these methods --
the hard part is getting all of the browser vendors to implement one
model.

> Roy> Instead of railing against the browser developers for their
> Roy> common sense restriction
>
> The point is not that they restrict it, but that the implementations
> don't fully support the HTTP standard. None of them restricts DELETE
> or PUT in any way.

Umm, I remember discussion from 1994 that was specifically about
restricting methods in HTML. Why each browser chose not to implement
certain things is a mystery, but they were certainly aware of the
methods at that time. IIRC, they simply had no agreed UI for indicating
the distinction when the action is selected. Somebody has to define
that for them before any such behavior can be standardized.

....Roy
On 8/18/07, Roy T. Fielding <fielding@...> wrote:
> On Aug 17, 2007, at 11:56 PM, Alan Dean wrote:
> > I'm not sure exactly what is happening. Support for PUT and DELETE was
> > part of the original WHATWG document:
> >
> > http://www.whatwg.org/specs/web-forms/2004-06-27-call-for-comments/#methodAndEnctypes
> >
> > but the current working draft is devoid of content:
> >
> > http://www.whatwg.org/specs/web-apps/current-work/#forms

Hixie has links there to WF2 and it says they are going to incorporate
its content:

http://www.whatwg.org/specs/web-forms/current-work/#methodAndEnctypes

Hugh
Scott Chapman wrote:
> ...
> XHR does these things behind the scenes all the time, with no explicit
> action on the part of the user. I simply want all the HTTP protocol to
> be supported this way, rather than a subset. I should have clarified in
> my original post that this was only intended in an XHR context. Safari
> still doesn't do XHR PUT/DELETE as far as I know and I'm interested in
> XHR workarounds that work.
> ...

I just tried BIND and PROPFIND with Safari 3 under Windows and they seem
to work. So maybe this problem is gone.

As far as I can tell, the only browser that is *really* challenged wrt
method names is Opera, which silently maps method names to GET. This
problem has been known for minimally 6 months now, and I see no
improvements in weekly builds (same with respect to XSLT, but that's
another story).

Best regards, Julian
Scott Chapman wrote:
> I invite your comments. Anyone else using this? Thoughts on it? Will it
> work in the real world? Any other work-arounds? etc.

The problem is bigger than just getting verbs supported in HTTP clients;
handling status codes is also "challenging":

http://tinyurl.com/22kry2

--
Patrick Mueller
http://muellerware.org
The fact that XHR automatically follows 302 doesn't help, either.

On 8/20/07, Patrick Mueller <pmuellr@...> wrote:
> Scott Chapman wrote:
> > I invite your comments. Anyone else using this? Thoughts on it? Will it
> > work in the real world? Any other work-arounds? etc.
>
> The problem is bigger than just getting verbs supported in HTTP clients;
> handling status codes is also "challenging":
>
> http://tinyurl.com/22kry2
>
> --
> Patrick Mueller
> http://muellerware.org
Josh Sled wrote:
> Keith Gaughan <keith@...> writes:
>
>> "The entity returned with this response SHOULD include an indication of the
>> request's current status and either a pointer to a status monitor or some
>> estimate of when the user can expect the request to be fulfilled."
>>
>> Which, to me, makes things a little awkward for the client.
>
> Why?

Because it's one more thing the client has to know how to parse. Simple
as that, really.

K.

--
Blacknight Internet Solutions Ltd. <http://blacknight.ie/>
Unit 12A Barrowside Business Park, Sleaty Road, Graiguecullen, Carlow, Ireland
Company No.: 370845
On 8/16/07, Josh Sled <jsled@...> wrote:
> Keith Gaughan <keith@...> writes:
> Can
> > anybody think of any reasons why I shouldn't go down this route?
>
> No.
>
>
> 202 Accepted
> Content-Type: application/json+our-service
>
> {"_type": "current-status",
> "message": "enqueued for next batch",
> "check-href" : "/deferred/batch/1/request/42",
> "try-again-after": {"_type": "relative-time", "value": "30", "units": "minutes"}
> }
>
>
> As for the awkwardness...
>
> The custom entity format is about the same level of "awkwardness" as the fact
> that the 202 definition in HTTP doesn't mean the client should expect a URL
> in Location to contain anything special...
>
> But, does any client support either (entity-based or Location-based) approach
> to handling 202 responses? It's service-specific coupling in both cases,
> right?
>
I'm not sure it's very important to have a non-service-specific
solution, but instead of 202, you could return 303 See Other, with
Location set to the status URL. You could even return 503 Service
Unavailable from the status URL, with a Retry-After header, if you
wanted to force generic user agents to poll.
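[Editorial note: a client loop covering both suggestions above (poll a status resource, honour Retry-After on 503, follow a 303 to a moved status URL) might look like the sketch below. `fetch` is a stand-in for a real HTTP GET; the URI scheme is invented.]

```python
# Sketch of a generic client loop for deferred processing: poll the status
# URL taken from a 202's Location header (or entity), sleeping on 503 per
# Retry-After and following 303 redirects, until the work completes.

import time

def wait_for_completion(fetch, status_url, max_polls=10):
    """fetch(url) -> (status_code, headers_dict, body); stand-in for HTTP GET."""
    for _ in range(max_polls):
        status, headers, body = fetch(status_url)
        if status == 200:
            return body                            # work finished
        if status == 503:
            time.sleep(int(headers.get("Retry-After", 1)))
        elif status == 303:
            status_url = headers["Location"]       # status moved; follow it
        else:
            raise RuntimeError(f"unexpected status {status}")
    raise TimeoutError("gave up polling")
```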
Hi all,

I am new to the subject and I have a big question: could I migrate my
application from an object-oriented architecture to a RESTful one?

In my application my clients are Java based and my servers are C++
Comet-based applications, and communication is done thanks to CORBA. How
can I adapt a REST design in this case? My servers cannot understand HTTP
requests. Do I absolutely have to develop an adaptor in order to
communicate? Does anyone have any experience like this?

Thanks,
Samya
Just dropping a note off here to say that the slides from the REST
Workshop I delivered at Burton Group's (my employer) Catalyst conference
this year are available for free download. They're behind a registration
wall, unfortunately, but worth the effort to download. It could help you
in educating your colleagues and managers about REST.

More info on my blog:

http://wanderingbarque.com/nonintersecting/2007/08/22/rest-workshop-slides-available/

Good Luck,
Pete
Hi Peter, as mentioned in slide 8: "Clients-not servers-are responsible
for managing application state". Do you mean: Clients (not servers) are
responsible for managing application state?

--
Teo Hui Ming
I had to read your question three times before I got what you were
asking, but, yes, I do.

Using dashes, the sentence could be read like this:

Clients-not servers-are responsible for application state

Parens are much more clear:

Clients (not servers) are responsible for application state

-Pete

--- In rest-discuss@yahoogroups.com, "Teo Hui Ming" <teohuiming.work@...> wrote:
>
> Hi Peter, as mentioned in slide 8: "Clients-not servers-are
> responsible for managing application state". Do you mean: Clients (not
> servers) are responsible for managing application state
>
> --
> Teo Hui Ming
Hi Folks,

For those using WADL, there is a bug in the WADL XML Schema [1].

This needs to be removed (it's not a legal namespace declaration - the
xml namespace is implicit, it may never be declared):

xmlns:xml="http://www.w3.org/XML/1998/namespace"

Once it's removed the schema works fine.

/Roger

[1] https://wadl.dev.java.net/wadl20061109.xsd (view page source to see
the actual schema)
Blame ASCII and keyboards for not including the em dash :)

"Traditionally, typewriters had only a single hyphen glyph, so it is
common to use two monospace hyphens strung together--like this--to serve
as an em dash."
-- from http://en.wikipedia.org/wiki/Em_dash#Em_dash

On Thu, Aug 23, 2007 at 11:14:38AM -0000, pete.lacey wrote:
> I had to read your question three times before I got what you were
> asking, but, yes, I do.
>
> Using dashes, the sentence could be read like this:
>
> Clients-not servers-are responsible for application state
>
> Parens are much more clear:
>
> Clients (not servers) are responsible for application state
>
> -Pete

--
Paul Winkler
http://www.slinkp.com
Paul Winkler wrote: > Blame ASCII and keyboards for not including the em dash :) Actually, I blame myself. PPT supports em dashes (type two hyphens and a character, and the hyphens are converted to em dashes [type one hyphen and a character for an en dash]), and I use them routinely, but not here for some reason. Oh well. --Pete
> For those using WADL, there is a bug in the WADL XML Schema [1].
>
> This needs to be removed (it's not a legal namespace declaration - the
> xml namespace is implicit, it may never be declared):
>
> xmlns:xml="http://www.w3.org/XML/1998/namespace"
>
> Once it's removed the schema works fine.

Hmm, I thought you could declare it but you have to use the correct
namespace name. The namespace rec seems to confirm that, see [2]:

"The prefix xml is by definition bound to the namespace name
http://www.w3.org/XML/1998/namespace. It may, but need not, be declared,
and must not be bound to any other namespace name. Other prefixes must
not be bound to this namespace name, and it must not be declared as the
default namespace."

Marc.

> [1] https://wadl.dev.java.net/wadl20061109.xsd (view page source to see
> the actual schema)

[2] http://www.w3.org/TR/REC-xml-names/#xmlReserved

---
Marc Hadley <marc.hadley at sun.com>
CTO Office, Sun Microsystems.
Hi Marc,

At least some of the XML Schema validators don't like it when you declare
the XML namespace - an error is generated. There may be some validators
that don't complain. Both validators that I used generated an error -
Xerces and XSV.

/Roger

________________________________
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Marc Hadley
Sent: Friday, August 24, 2007 10:20 AM
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: Bug in WADL XML Schema
Marc Hadley wrote:
>> For those using WADL, there is a bug in the WADL XML Schema [1].
>>
>> This needs to be removed (it's not a legal namespace declaration - the
>> xml namespace is implicit, it may never be declared):
>>
>> xmlns:xml="http://www.w3.org/XML/1998/namespace"
>>
>> Once it's removed the schema works fine.
>
> Hmm, I thought you could declare it but you have to use the correct
> namespace name. The namespace rec seems to confirm that, see [2]:

Yep. If something is choking on
xmlns:xml="http://www.w3.org/XML/1998/namespace" the bug is with the
processor, not the schema.
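[Editorial note: the spec language quoted above is easy to check empirically; the `xml` prefix works without any declaration. For example, with Python's standard-library ElementTree:]

```python
# The `xml` prefix is implicitly bound to
# http://www.w3.org/XML/1998/namespace, so xml:lang needs no xmlns:xml
# declaration anywhere in the document.

import xml.etree.ElementTree as ET

doc = ET.fromstring('<root xml:lang="en"/>')  # no namespace declaration
assert doc.get("{http://www.w3.org/XML/1998/namespace}lang") == "en"
```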
* Jon Hanna <jon@...> [2007-08-24 16:35]:
> If something is choking on
> xmlns:xml="http://www.w3.org/XML/1998/namespace" the bug is
> with the processor, not the schema.

But it’s still redundant to declare it, so removing it seems like a
cheap way to improve users’ lives. Next on the agenda would be filing a
bug against the processors that complain, of course.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
--- In rest-discuss@yahoogroups.com, "A. Pagaltzis" <pagaltzis@...> wrote:
> This is a contradiction in terms. REST is defined by two
> properties:
>
> 1. Server state is exposed as a set of resources that have a
> uniform interface and are named by URIs.
>
> 2. The client doesn’t make any assumptions about the server URI
> space; all it does is follow links.

I am currently defining an interface using the REST style. What I do is
discover the resources and the representations of those resources. I
have a test specification, I would like to run that test, and I want
test reports.

Resources: TestSpec, TestRun, TestReport, TestSpecs, TestRuns, TestReports.

Example:

POST http://<host>/Test/TestSpecs
<TestSpec>...</TestSpec>
return 201
Location: http://<host>/Test/TestSpecs/TestSpec1.xml

POST http://<host>/Test/TestRuns
<TestRun @TestSpec="http://<host>/Test/TestSpecs/TestSpec1.xml"/>
return 201
Location: http://<host>/Test/TestRuns/TestRun1.xml

GET http://<host>/Test/TestReports

In my case the client does make assumptions about where to find the
resources, but does this make the interface less RESTful?

Regards,
Roger van de Kimmenade
I have a webservice interface that can return a list of all schemas,
example:
GET http://www.test.com/schemas
return: 200,
<ul>
<li>
<namespace>http://www.test.com/schema1/v1.0</namespace>
<uri>http://www.test.com/schemas/schema1/v1.0/schema1.xml</uri>
</li>
</ul>
However I would like to get a schema based on a namespace.
I see (at least) 2 solutions:
1) Using a parameter at the schemas resource
GET
http://www.test.com/schemas?namespace="http://www.test.com/schema1/v1.0"
2) Using a separate resource with parameter
GET
http://www.test.com/schemas/schema?namespace="http://www.test.com/schema1/v1.0"
The advantage of 1 is that it seems more natural.
The disadvantage is that it can return two different XML formats:
without the query parameter it returns a list, and with the parameter
it returns a schema. At least, that is what I want it to return; I
could also go for a list of one resource.
The advantage of 2 is that it always returns a schema.
The disadvantage is that it looks clumsy.
Any suggestions?
Regards,
Roger van de Kimmenade
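The two candidate URL shapes can be compared concretely. `urlencode` handles percent-encoding the namespace URI, so the quotes shown around the namespace in the examples above aren't needed once it is properly encoded. Host and paths are the hypothetical ones from the post:

```python
from urllib.parse import urlencode

BASE = "http://www.test.com/schemas"  # hypothetical host from the example

def option1_url(namespace: str) -> str:
    """Option 1: filter the schemas collection with a query parameter."""
    return BASE + "?" + urlencode({"namespace": namespace})

def option2_url(namespace: str) -> str:
    """Option 2: a dedicated sub-resource taking the same parameter."""
    return BASE + "/schema?" + urlencode({"namespace": namespace})

print(option1_url("http://www.test.com/schema1/v1.0"))
# → http://www.test.com/schemas?namespace=http%3A%2F%2Fwww.test.com%2Fschema1%2Fv1.0
```

Either way, each distinct query string names a distinct resource, which is the point Hugh makes in the reply below.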
On 9/10/07, rogervdkimmenade <rvdkimmenade@...> wrote: > I have an webservice interface that can return a list of all schemas, > [...] > 1) Using a parameter at the schemas resource > GET > http://www.test.com/schemas?namespace="http://www.test.com/schema1/v1.0" > > 2) Using a separate resource with parameter > GET > http://www.test.com/schemas/schema?namespace="http://www.test.com/schema1/v1.0" > > The advantage of 1 is that it seems to be more natural > The disadvantage is that it can return two different XML formats. > [...] I don't see any disadvantage to 1. The url http://www.test.com/schemas is a distinct url from http://www.test.com/schemas?namespace="http://www.test.com/schema1/v1.0" and it identifies a different resource. > The advantage of 2 is that the schema returns always a schema. > The disadvantage is that it looks clumsy. > > Any suggestions? -- Hugh
I'd like to extend Roger's original question and ask if anyone has any opinion about utilizing the HEAD verb to return schemas & representation formats? Anyone have any experiences with it? Thanks, Griffin On Sep 10, 2007, at 8:40 AM, Hugh Winkler wrote: > [...]
Griffin Caprio wrote: > > > I'd like to extend Rogers original question and ask if anyone has any > opinion about utilizing the HEAD verb to return schemas & > representation formats? Anyone have any experiences with it ? HEAD is the same as GET, except that the response body isn't included. So what exactly are you suggesting?
On 9/10/07, Hugh Winkler <hughw@...> wrote: > On 9/10/07, rogervdkimmenade <rvdkimmenade@...> wrote: > > I have an webservice interface that can return a list of all schemas, > > example: > > GET http://www.test.com/schemas > > return: 200, > > <ul> > > <li> > > <namespace>http://www.test.com/schema1/v1.0</namespace> > > <uri>http://www.test.com/schemas/schema1/v1.0/schema1.xml</uri> > > </li> > > </ul> > > > > However I would like to get a schema based on a namespace. > > I see (at least) 2 solutions: > > > > 1) Using a parameter at the schemas resource > > GET > > http://www.test.com/schemas?namespace="http://www.test.com/schema1/v1.0" > > > > 2) Using a separate resource with parameter > > GET > > http://www.test.com/schemas/schema?namespace="http://www.test.com/schema1/v1.0" > > > > The advantage of 1 is that it seems to be more natural > > The disadvantage is that it can return two different XML formats. > > Without the query parameter it returns a list and with the parameter > > it returns a schema. At least that is what i want it to return. I > > could also go for a list of 1 resource. > > > > > > I don't see any disadvantage to 1. The url > > http://www.test.com/schemas > > is a distinct url from > > http://www.test.com/schemas?namespace="http://www.test.com/schema1/v1.0" > > and it identifies a different resource. > > > ...and following up, I'd eliminate the query parameter altogether. http://www.test.com/schemas => list of schemas http://www.test.com/schemas/percent-encoded-namespace-uri => the schema So caches will cache. Hugh
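Hugh's layout puts the percent-encoded namespace URI into a single path segment under /schemas/. A sketch of the mapping, using `quote` with `safe=""` so the slashes inside the namespace get encoded too, and showing that it round-trips:

```python
from urllib.parse import quote, unquote

SCHEMAS = "http://www.test.com/schemas"  # hypothetical host from the thread

def schema_url(namespace: str) -> str:
    # Encode every reserved character so the whole namespace URI
    # fits into one path segment under /schemas/.
    return SCHEMAS + "/" + quote(namespace, safe="")

def namespace_of(url: str) -> str:
    # Recover the namespace from the last path segment.
    return unquote(url.rsplit("/", 1)[1])

ns = "http://www.test.com/schema1/v1.0"
print(schema_url(ns))
# → http://www.test.com/schemas/http%3A%2F%2Fwww.test.com%2Fschema1%2Fv1.0
```

Because there is no query string, plain GETs against these URLs are straightforwardly cacheable, which is the "so caches will cache" point.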
Hmm, maybe I misunderstood what was meant by "metadata" when I originally read about HEAD. I was originally wondering if anyone used HEAD to retrieve metadata about a specific resource, perhaps returning XSD schemas or something. But since HEAD is meant to return HTTP headers, it doesn't seem logical to try and return that type of metadata in a header. That being said, what is the RESTful way, if there is one, to serve up request / response formats? - Griffin On Sep 10, 2007, at 4:23 PM, Julian Reschke wrote: > Griffin Caprio wrote: >> I'd like to extend Rogers original question and ask if anyone has any >> opinion about utilizing the HEAD verb to return schemas & >> representation formats? Anyone have any experiences with it ? > > HEAD is the same as GET, except that the response body isn't included. > > So what exactly are you suggesting?
On 9/10/07, Hugh Winkler <hughw@...> wrote: > [...] > ...and following up, I'd eliminate the query parameter altogether. > > http://www.test.com/schemas => list of schemas > http://www.test.com/schemas/percent-encoded-namespace-uri => the schema > > So caches will cache.
> And for completeness, having reread your original post, I see that the actual schema url is like http://www.test.com/schemas/schema1/v1.0/schema1.xml So revise the above so that: GET http://www.test.com/schemas/http://www.test.com/schema1/v1.0 returns 303 with Location header = http://www.test.com/schemas/schema1/v1.0/schema1.xml (and encode the first url) Think I'm done :)
On 9/10/07, Hugh Winkler <hughw@...> wrote: > [...] > > ...and following up, I'd eliminate the query parameter altogether. > > > > http://www.test.com/schemas => list of schemas > > http://www.test.com/schemas/percent-encoded-namespace-uri => the schema > > > > So caches will cache.
> > > > > And for completeness, having reread your original post, I see that > the actual schema url is like > > http://www.test.com/schemas/schema1/v1.0/schema1.xml > > So revise the above so that: > > GET http://www.test.com/schemas/http://www.test.com/schema1/v1.0 > returns 303 with Location header = > http://www.test.com/schemas/schema1/v1.0/schema1.xml > > (and encode the first url) > > Think I'm done :) > Doggone it. That should be a 302, not 303. As long as I'm filling up people's mailboxes: This is all to tell caches along the way they can respond to the "query" with that cached schema document. The schema document has the canonical url http://www.test.com/schemas/schema1/v1.0/schema1.xml Hugh
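The redirect step Hugh describes — resolve the encoded namespace, answer 302 with the canonical schema URL so intermediaries can serve the cached document — might look like the sketch below. The registry contents are the hypothetical URLs from this thread:

```python
from urllib.parse import unquote

# Hypothetical namespace -> canonical schema URL registry,
# using the example URLs from the thread.
REGISTRY = {
    "http://www.test.com/schema1/v1.0":
        "http://www.test.com/schemas/schema1/v1.0/schema1.xml",
}

def handle_lookup(path_segment: str):
    """Resolve /schemas/<percent-encoded-namespace> to a 302 redirect
    pointing at the canonical schema document."""
    namespace = unquote(path_segment)
    canonical = REGISTRY.get(namespace)
    if canonical is None:
        return 404, {}
    return 302, {"Location": canonical}

print(handle_lookup("http%3A%2F%2Fwww.test.com%2Fschema1%2Fv1.0"))
```

The schema document itself keeps one canonical URL, so caches store a single copy of the (potentially large) schema plus cheap redirect entries for the lookup URIs.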
> Posted by: "Griffin Caprio" griffin.caprio@... griffinc18 > Mon Sep 10, 2007 2:43 pm (PST) > > Hmm, maybe I misunderstood what was meant by "metadata" when i > originally read about HEAD. > > I was originally wondering if anyone used HEAD retrieve metadata > about a specific resource, perhaps returning XSD schemas or > something. But since HEAD is meant to return HTTP headers, it > doesn't seem logical to try and return that type of metadata in a > header. > > That being said, what is the RESTful way, if there is one, to serve > up request / response formats? Sounds like a job for the OPTIONS method. I've been noodling with the idea of returning a WADL snippet in the response to an OPTIONS method. Essentially you'd return a resource element that describes the supported methods and representation formats. Need to think about it some more but would be interested in folks thoughts about this approach. Marc. --- Marc Hadley <marc.hadley at sun.com> CTO Office, Sun Microsystems.
OPTIONS would be interesting. I've just been using it to return acceptable methods, a la "Allow: POST, GET". I suppose returning a resource would work too. As a first thought, something like: http://www.foo.com/<resource>/<action>/<request,response> would return the representation format for a particular action's request or response. But this doesn't feel too RESTy to me. Anyone else? Griffin On Sep 11, 2007, at 8:43 AM, Marc Hadley wrote: > [...]
On Sep 11, 2007, at 12:19 PM, Griffin Caprio wrote:
> OPTIONS would be interesting. I've just been using it to return
> acceptable methods, a la "Allow: POST, GET". I suppose returning
> a resource would work too. As a first thought, something like:
>
> http://www.foo.com/<resource>/<action>/<request,response>
>
> would return the representation format for a particular actions
> request or response. But this doesn't feel too RESTy to me.
>
I meant returning a WADL[1] resource description, e.g.:
OPTIONS on http://foo.com/resource would yield a response with the
allow header and the following in the response entity body:
<options xmlns="..." xmlns:foo="...">
<grammars>
<include href="/foo.xsd"/>
</grammars>
<method name="GET">
<request/>
<response>
<representation mediaType="application/xml"
element="foo:resource"/>
<representation mediaType="application/json"/>
</response>
</method>
<method name="PUT">
<request>
<representation mediaType="application/xml"
element="foo:resource"/>
<representation mediaType="application/json"/>
</request>
<response/>
</method>
</options>
A client could then discover that http://foo.com/resource offers
representations as either XML (with a root element of foo:resource
defined in the /foo.xsd schema) or JSON and can accept PUT requests
with entities in the same formats.
Marc
[1] http://wadl.dev.java.net/
---
Marc Hadley <marc.hadley at sun.com>
CTO Office, Sun Microsystems.
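A client could mine a response shaped like Marc's for the supported methods and representations with a few lines of XML processing. In the sketch below, the namespace URI (elided as "..." in the example above) is replaced with a made-up placeholder, and the grammars/element details are omitted; only the method/representation structure is assumed:

```python
import xml.etree.ElementTree as ET

# Placeholder namespace: the example leaves the real WADL namespace as "...".
NS = "urn:example:wadl"

DOC = f"""
<options xmlns="{NS}">
  <method name="GET">
    <response>
      <representation mediaType="application/xml"/>
      <representation mediaType="application/json"/>
    </response>
  </method>
  <method name="PUT">
    <request>
      <representation mediaType="application/xml"/>
    </request>
  </method>
</options>
"""

def supported(doc: str) -> dict:
    """Map each advertised method to the media types of its representations."""
    root = ET.fromstring(doc)
    return {
        m.get("name"): [r.get("mediaType")
                        for r in m.iter(f"{{{NS}}}representation")]
        for m in root.findall(f"{{{NS}}}method")
    }

print(supported(DOC))
# → {'GET': ['application/xml', 'application/json'], 'PUT': ['application/xml']}
```

With that, the discovery step Marc describes — learning that the resource serves XML or JSON and accepts PUTs in the same formats — becomes a dictionary lookup.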
Marc - I think GET's more appropriate, because that WADL is, in effect, a form, and forms should be first class hypermedia representations returned by dereferencing a URI. Mark. On 9/11/07, Marc Hadley <hadley@...> wrote: > [...] -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Why not do a GET with an "Accept" header that says I want WADL? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On Sep 11, 2007, at 7:33 PM, Marc Hadley wrote: > [...]
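Stefan's variant is ordinary content negotiation: a single GET whose Accept header selects the description format. A deliberately naive matcher sketch (the WADL media type shown is illustrative; real Accept handling also weighs q-values and wildcards per RFC 2616 section 14.1):

```python
def negotiate(accept_header: str,
              available=("application/vnd.sun.wadl+xml", "application/xml")):
    """Pick the first media type listed in Accept that the server offers.

    Naive on purpose: q-values are stripped rather than ranked, and
    wildcards like */* are not handled.
    """
    requested = [part.split(";")[0].strip()
                 for part in accept_header.split(",")]
    for media_type in requested:
        if media_type in available:
            return media_type
    return None  # a real server would answer 406 Not Acceptable

print(negotiate("application/vnd.sun.wadl+xml, application/xml;q=0.5"))
# → application/vnd.sun.wadl+xml
```

The appeal of this route is that the service description is just another representation of the resource, fetched with the same verb as everything else.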
On Sep 11, 2007, at 2:42 PM, Mark Baker wrote: > Marc - I think GET's more appropriate, because that WADL is, in > effect, a form, and forms should be first class hypermedia > representations returned by dereferencing a URI. > Right, there's something meta about WADL that made me think that OPTIONS would be a good choice but I take your point. Marc. > [...] --- Marc Hadley <marc.hadley at sun.com> CTO Office, Sun Microsystems.
Hello,
Is there a "standard" XML format to get a directory listing with meat
information? For example a microformat
Example:
GET http://www.directory/list
<dir>
<item>
<metainfo>
</metainfo>
<xlink:href>http://www.directory/list/item1.xml</xlink:href>
</item>
</dir>
Thanks
Roger van de Kimmenade
I was thinking: Why not use REST as a simplified messaging system between components? This way a component could be a servlet. Components are also decoupled. Any thoughts? Any experience? Roger
Hi,
I was listening to Roy Fielding’s “A little REST and Relaxation”
presentation as given at the Jazoon ’07 conference:
http://www.parleys.com/display/PARLEYS/A%20little%20REST%20and%20Relaxation
http://jazoon.com/en/conference/day2.html
At 21:50, he goes into a point that Mike Schinkel made here a
while ago:
> Important to REST was the notion of minimizing coupling between
> systems. There’s a lot of talk in object-oriented language
> research about the importance of minimizing coupling and it’s
> interesting for me sometimes because generally what some people
> do in the language research in terms of coupling is so much
> more extensive than the level of requirements that we had for
> minimizing coupling. We needed a system that could be developed
> independently by 500-1000 different companies, and each of the
> things that they added to the web could be extended and
> deployed independently without affecting anyone else on the
> web, again without actually knowing what those extensions will
> be. And the only way you can do that is completely eliminate
> coupling between clients and servers. The only coupling that
> exists in a REST-based architecture is that the first address
> that you access has… ah… basically a bookmark, and you need to
> keep track of that bookmark. So essentially the rationale
> behind the “Cool URIs Don’t Change” is essentially that’s the
> last remaining bit of coupling in the architectures that are
> based on REST.
So yeah. Decoupling via hypermedia-driven app state does not
extend backward past the beginning of a client-server interaction
in a way that rather reminds me of how causality within this
universe does not extend backward past the big bang pinhole.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On Sep 18, 2007, at 10:13 AM, A. Pagaltzis wrote: > I was listening to Roy Fielding’s “A little REST and Relaxation” > presentation as given at the Jazoon ’07 conference: > > http://www.parleys.com/display/PARLEYS/A%20little%20REST%20and%20Relaxation > http://jazoon.com/en/conference/day2.html I wasn't going to point to that one until after my talk today at RailsConf Europe in Berlin. The Jazoon one had to be reduced from 1 hour to 30 minutes (found out the day before), and even then it was far too rushed. I was standing on stage at a stadium-style movie theatre, with a big-screen projection of my slides way over on the right and a projection of what you see on the video above my head. I couldn't see the audience of 600 or so people, and they weren't looking at me anyway, so the whole experience was a bit disjointed. Surprisingly, it sounds much better than what it seemed like while I was on stage, though the audio-leveling takes a bit of the variation out of my voice (i.e., will put you to sleep). But I always hate watching myself on TV. Today's talk should be better. ....Roy
Thanks for sharing. :) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of A. Pagaltzis > Sent: Tuesday, September 18, 2007 4:13 AM > To: rest-discuss > Subject: [rest-discuss] Hypermedia-driven app state, > bookmarks, and coupling in REST > > [...]
Hi Roy, * Roy T. Fielding <fielding@...> [2007-09-18 11:20]: > But I always hate watching myself on TV. I thought it came out just fine! > Today's talk should be better. If the radio version was that good, the 12" mix should be fantastic. :-) Do you know if it will be put online? (Even if you hate seeing yourself on camera – sorry. :-)) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* rogervdkimmenade <rvdkimmenade@...> [2007-09-14 16:20]: > Is there a "standard" XML format to get a directory listing > with meta information? I doubt it. There are much better ways to move this information around, particularly if you’re not sending it over the wire; and if you *are* sending it over the wire, you might also want to reexamine what you are trying to achieve. I’d suggest JSON for this particular task; it lends itself to it much better than XML does. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* rogervdkimmenade <rvdkimmenade@...> [2007-09-17 12:55]: > I was thinking: > Why not use REST as a simplified messaging system between > components? This way a component could be a servlet. > Components are also decoupled. > > Any thoughts? Using REST as a model for apps on the system is an idea that goes back quite a bit; you’re not the first to whom this occurs, and it seems quite reasonable. In fact, the core ideas of Unix have a lot of constraints in common with the REST style (a uniform interface, addressability, etc). Unix doesn’t go as far though, and if you go a bit further you stumble into all manner of departures from the “everything is a file” concept, like processes and terminals and ioctls and various kinds of flatfile databases, and oh god it’s a tangled mess. That is in part because of a failure of imagination at the time; the original designers just didn’t see how to fit many of the odd-shaped things into the file system. The qmail approach to configuration is an example of how they could have decomposed more of the system in terms of files. So then they made another go at it, sticking to a radically conservative approach to fix this mess, and what came out of that effort is now known as Plan 9. And that does indeed work out much more nicely. Pity it never really caught on. > Any experience ? I don’t think anyone has used specifically a REST-based architecture to build applications or an operating system. But, as per above, we do have positive experience with systems built to honour *some* of its constraints, and they show that stricter adherence yields better results. So the idea of using REST as a style for more than just web apps is certainly not absurd. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On Sep 18, 2007, at 12:53 PM, A. Pagaltzis wrote:
> > Today's talk should be better.
>
> If the radio version was that good, the 12" mix should be
> fantastic. :-) Do you know if it will it be put online?
> (Even if you hate seeing yourself on camera – sorry. :-))
I don't know. It is being taped, I think. It starts in an hour.
Jet lag is starting to kick in, so I better get some coffee.
http://roy.gbiv.com/talks/200709_fielding_rest.pdf
....Roy
On 9/14/07, rogervdkimmenade <rvdkimmenade@...> wrote: > Hello, > > Is there a "standard" XML format to get a directory listing with meat > information? This is the only information I could find on XML and the beef industry: http://links.jstor.org/sici?sici=0002-9092(200012)82%3A5%3C1105%3AITACSC%3E2.0.CO%3B2-X Good luck, cowboy!
Alas, just the 400 range. It's those darn 300's that I can never keep track of. http://apelad.blogspot.com/2007/09/suitable-for-printing.html A poster with the full range would be... very RESTful.
Karen: > Alas, just the 400 range. It's those darn 300's that I can never keep > track of. > > http://apelad.blogspot.com/2007/09/suitable-for-printing.html > > A poster with the full range would be... very RESTful. Maybe this list will help, though it's not graphical: http://diveintomark.org/archives/2006/12/07/rest-for-toddlers Andrzej Jan Taramina Chaeron Corporation: Enterprise System Solutions http://www.chaeron.com
All, I seem to have lost rational thought at the moment and would like some advice. Let's assume a RESTful weather service with the URIs looking something like this: # Daily forecast forecast/2007/10/02 # Today's forecast forecast/today I would expect the URI for today's forecast to redirect to the forecast for today's date. My question is which redirect to use. My first thought was that I would use a 301, but the more I thought about it, I think a 302/307 is the correct choice. What does the group think? Thanks! Brandon
Brandon Carlson wrote: > > > All, > > I seem to have lost rational thought at moment and would like some > advice. > > Let's assume a RESTful weather service with the URIs looking > something like this: > > # Daily forecast > forecast/2007/10/02 > > # Today's forecast > forecast/today > > I would expect the URI for today's forecast to redirect to the > forecast for today's date. My question is which redirect to use. My > first thought is that I would use a 301, but then, the more I > thought about it I think a 302/307 is the correct choice. > > What does the group think? > > Thanks! > Brandon It's a temporary redirect, so 307 would be correct. However, where's the point in forcing the redirect at all? Best regards, Julian
Julian Reschke wrote: > Brandon Carlson wrote: > > > > Let's assume a RESTful weather service with the URIs looking > > something like this: > > > > # Daily forecast > > forecast/2007/10/02 > > > > # Today's forecast > > forecast/today > > > > I would expect the URI for today's forecast to redirect to the > > forecast for today's date. My question is which redirect to use. My > > first thought was that I would use a 301, but the more I > > thought about it, the more I think a 302/307 is the correct choice. > > > > What does the group think? > > It's a temporary redirect, so 307 would be correct. > > However, what's the point in forcing the redirect at all? > From the POV of a GET, a 303 and 307 are very similar. However, a 307 is a "temporary redirect" which to me implies "I've moved the resource over there for a little while, but next time you need it it might be back here." While a 303, "see other," says to me "I understand what you're looking for, and I'm sending you over there to get it." Why redirect? Quoting the RWS book: "The 303 status code is a good way to canonicalize your resources. You can make them available through many URIs, but only have one "real" URI per representation. All the other URIs use a 303 to point to the canonical URI for that representation. For instance, a 303 might redirect a request for http://www.example.com/software/current.tar.gz to the URI http://www.example.com/software/1.0.2.tar.gz." From the same source, 302 is to be avoided due to ambiguity issues. -- Pete
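As a rough sketch of the 303 approach being discussed, a minimal handler using Python's stdlib `http.server` could answer the `forecast/today` alias with a `303 See Other` pointing at the dated, canonical URI. The URI layout follows Brandon's example; the handler and function names, and the stand-in forecast body, are assumptions for illustration:

```python
from datetime import date
from http.server import BaseHTTPRequestHandler

def canonical_forecast_path(today=None):
    """Map the 'today' convenience URI to the dated, canonical URI."""
    today = today or date.today()
    return today.strftime("/forecast/%Y/%m/%d")

class ForecastHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/forecast/today":
            # 303 See Other: "I understand what you're looking for,
            # and I'm sending you over there to get it."
            self.send_response(303)
            self.send_header("Location", canonical_forecast_path())
            self.end_headers()
        else:
            body = b"Sunny, 23 C\n"  # stand-in forecast representation
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

# HTTPServer(("", 8000), ForecastHandler).serve_forever() would run it.
```

A 307 would look identical apart from the status code; the choice only changes what the client is entitled to assume about the alias URI afterwards.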
On Oct 2, 2007, at 4:16 PM, Peter Lacey wrote: > The 303 status code is a good way > to canonicalize your resources. You can make them available through > many > URIs, but only have one "real" URI per representation. Which of course means an extra client/server roundtrip, which may or may not be acceptable. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Stefan Tilkov wrote: > On Oct 2, 2007, at 4:16 PM, Peter Lacey wrote: > >> The 303 status code is a good way >> to canonicalize your resources. You can make them available through many >> URIs, but only have one "real" URI per representation. > > Which of course means an extra client/server roundtrip, which may or > may not be acceptable. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ It does, but the proper use of HTTP frequently requires multiple trips. Certainly redirects are common enough. And, if you want to treat them differently, there are also a few 2xx responses after a POST/PUT that might encourage a client to go back to the server. HEAD and OPTIONS usually imply a future trip to the server, too. Also, in the case of a GET, the first trip is pretty lightweight; headers only. Similarly, form-based links that allow compliance with HATEOAS (God, I hate that term) require an extra trip. Want to look up a person? Can you construct a link, e.g., http://example.com/users?first_name="Stefan" ? Not without getting a form first. Finally, 303s mean a cache can store less information, and perform the redirection on the server's behalf. Ultimately, though, circumstances will help you decide whether the expense of a 303 outweighs simply returning the resource. -- Pete
agreed. Because the 303 is already pretty lightweight I think it would be ok. I suppose however that it entirely depends on the client's caching strategy. When receiving a 303, should the client discontinue using the source URI for subsequent calls? If so, for how long? When should it return to the source? In my case, the dates really are a snapshot in time and will never change once created, so the target of the redirect can be cached indefinitely. Thanks! brandon On 10/2/07, Stefan Tilkov <stefan.tilkov@...> wrote: > On Oct 2, 2007, at 4:16 PM, Peter Lacey wrote: > > > The 303 status code is a good way > > to canonicalize your resources. You can make them available through > > many > > URIs, but only have one "real" URI per representation. > > Which of course means an extra client/server roundtrip, which may or > may not be acceptable. > > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/
On 10/2/07, Nick Gall <nick.gall@...> wrote: > > On 10/2/07, Peter Lacey <placey@...> wrote: > > Why redirect? Quoting the RWS book " The 303 status code is a good way > > to canonicalize your resources. You can make them available through many > > URIs, but only have one "real" URI per representation. All the other > > URIs use a 303 to point to the canonical URI for that representation. > > For instance, a 303 might redirect a request for > > http://www.example.com/software/current.tar.gz to the URI > > http://www.example.com/software/1.0.2.tar.gz ." > > Wouldn't using the Content-Location HTTP header field also be a "good way to canonicalize your resources"? > > > The Content-Location entity-header field MAY be used to supply the resource location for the entity enclosed in the message when that entity is accessible from a location separate from the requested resource's URI. A server SHOULD provide a Content-Location for the variant corresponding to the response entity; especially in the case where a resource has multiple entities associated with it, and those entities actually have separate locations by which they might be individually accessed, the server SHOULD provide a Content-Location for the particular variant which is returned. > > It has the advantage of NOT requiring a round trip. In theory, yes. In practice in the wild, not so much. See this thread: http://lists.w3.org/Archives/Public/ietf-http-wg/2007JulSep/0269.html Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
On 10/2/07, Mark Baker <distobj@...> wrote: > On 10/2/07, Nick Gall <nick.gall@...> wrote: > > Wouldn't using the Content-Location HTTP header field also be a "good way to canonicalize your resources"? > > In theory, yes. In practice in the wild, not so much. Agreed. But I was thinking about "Web API" (programmatic) use of HTTP as opposed to typical browser behavior. As long as one documented one's interface and clients used HTTP libraries with full access to headers, then using Content-Location should be straightforward. Unless intermediaries (eg caches) typically strip such headers in flight. Do they? -- Nick -- Nick Gall Phone: +1.781.608.5871 AOL IM: Nicholas Gall Yahoo IM: nick_gall_1117 MSN IM: (same as email) Google Talk: (same as email) Email: nick.gall AT-SIGN gmail DOT com Weblog: http://ironick.typepad.com/ironick/ Furl: http://www.furl.net/members/ngall
On 10/2/07, Nick Gall <nick.gall@...> wrote: > On 10/2/07, Mark Baker <distobj@...> wrote: > > On 10/2/07, Nick Gall <nick.gall@...> wrote: > > > Wouldn't using the Content-Location HTTP header field also be a "good way to canonicalize your resources"? > > > > In theory, yes. In practice in the wild, not so much. > > Agreed. But I was thinking about "Web API" (programmatic) use of HTTP > as opposed to typical browser behavior. As long as one documented > one's interface and clients used HTTP libraries with full access to > headers, then using Content-Location should be straightforward. Unless > intermediaries (eg caches) typically strip such headers in flight. Do > they? I believe that's what Roy said, yes. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
On Oct 2, 2007, at 3:28 PM, Mark Baker wrote: > On 10/2/07, Nick Gall <nick.gall@...> wrote: > > On 10/2/07, Mark Baker <distobj@...> wrote: > > > On 10/2/07, Nick Gall <nick.gall@...> wrote: > > > > Wouldn't using the Content-Location HTTP header field also be > a "good way to canonicalize your resources"? > > > > > > In theory, yes. In practice in the wild, not so much. > > > > Agreed. But I was thinking about "Web API" (programmatic) use of > HTTP > > as opposed to typical browser behavior. As long as one documented > > one's interface and clients used HTTP libraries with full access to > > headers, then using Content-Location should be straightforward. > Unless > > intermediaries (eg caches) typically strip such headers in > flight. Do > > they? > > I believe that's what Roy said, yes. I meant to say that origin servers sometimes don't know what their own real URI should be due to the presence of intermediaries that rewrite incoming requests. And, because those same intermediaries aren't smart enough to rewrite responses that contain C-Location values (or simply don't know if the origin server already did that for them), the resulting field value is often wrong. That could be solvable using relative values for the location and a better description in the spec. I am not convinced that the 209 code is needed. OTOH, I also heard second-hand complaints from browser implementers that IIS is sending bogus location values by default. ....Roy
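To make Nick's alternative concrete, here is a hedged sketch of answering a GET for the alias directly, with no extra round trip, while advertising the canonical URI via `Content-Location`. Per Roy's note, a relative value sidesteps rewriting intermediaries that don't know the public host name. The function name and header layout are mine, not from any library:

```python
def respond_with_canonical(canonical_ref, body):
    """Answer a GET for an alias URI directly, naming the canonical
    variant via Content-Location instead of forcing a 303 round trip."""
    headers = {
        "Content-Type": "application/x-gzip",
        # Relative reference: resolved against the request URI, so a
        # rewriting proxy in front of the origin can't render it wrong.
        "Content-Location": canonical_ref,
        "Content-Length": str(len(body)),
    }
    return 200, headers, body

# e.g. answering a GET for /software/current.tar.gz:
status, headers, _ = respond_with_canonical("1.0.2.tar.gz", b"...")
```

Whether clients and caches in the wild actually honor the header is, as the thread notes, the real question.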
* Peter Lacey <placey@...> [2007-10-02 19:35]:
> compliance with HATEOAS (God, I hate that term)
Yeah, me too; which is why a while back I proposed saying
“hypermedia-driven application state” instead, whose initialism
is HDAS, and whose long form rolls off the tongue rather
rhythmically if you cut “application” to “app.” People weren’t
very enthusiastic about the idea, though.
So instead I’ve taken to saying “the hypermedia constraint” or
sometimes just “hypermedia” over either “HATEOAS” or using the
full phrase. Eg. in this case it’d be:
Similarly, form-based links that allow compliance with the
hypermedia constraint require an extra trip.
Try that, I find it helps.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On Oct 2, 2007, at 6:02 PM, A. Pagaltzis wrote: > * Peter Lacey <placey@wanderingbarque.com> [2007-10-02 19:35]: > > compliance with HATEOAS (God, I hate that term) > > Yeah, me too; which is why a while back I proposed saying > "hypermedia-driven application state" instead, whose initialism > is HDAS, and whose long form rolls off the tongue rather > rhythmically if you cut "application" to "app." People weren't > very enthusiastic about the idea, though. > > So instead I've taken to saying "the hypermedia constraint" or > sometimes just "hypermedia" over either "HATEOAS" or using the > full phrase. Eg. in this case it'd be: > > Similarly, form-based links that allow compliance with the > hypermedia constraint require an extra trip. The word "hypertext" should have been enough, but it actually means very different things to different people (especially those within the hypertext research community). I can't just shorten it to "hyperstate" either, since that would imply "much more state" (not what we want). It has some aspects in common with "data reactive" engines, as in http://www.cs.uoregon.edu/research/paraducks/papers/tr9605.d/ though I wasn't aware of that paper until 10 minutes ago. Much of the software architecture research on constraints was influenced by the GUI work on constraint-based layout toolkits. I would suggest calling it the "hypertext constraint" when in normal discussion, or the "reactive constraint" when within slapping distance of a literary hypertext fan. ....Roy
Hey, I have been reading a lot about how the Atom publishing protocol is very extensible and can be used to send any type of XML data in the pub-sub model. Let's say I have an XML document of some schema that I want to broadcast across nodes using the pub-sub model and have decided to use Atom for it. Where do I go from here? Is there any material on what to do, how to do it? The Atom-pub IETF draft says only a few paras about what to do (the small paragraph on Extending Atom). Does anyone have any whitepaper/notes I could look into? I know of a few simple hacks that could work, but then it could go outside Atom's RESTful constraints. I have a few ideas but I am refraining from mentioning them, in a desperate attempt to hide my ignorance and inexperience. Regards, dev
As far as I can see, the Windows Live team have exposed a genuine REST service (and note the use of HTTP Auth rather than cookies): http://dev.live.com/silverlight/api.aspx Just thought I'd post this, as I haven't seen a reference to this service in the group so far. Regards, Alan Dean http://thoughtpad.net/alan-dean http://simplewebservices.org
"Alan Dean" <alan.dean@...> writes: > As far as I can see, the Windows Live team have exposed a genuine REST > service (and note the use of HTTP Auth rather than cookies) > > http://dev.live.com/silverlight/api.aspx > > Just thought I post this, as I haven't seen a reference to this service > in the group so far. It won't last. -- Nic Ferrier http://www.woome.com - Enjoy the minute!
On 10/5/07, Ben Davies <omarshariffdontlikeit@...> wrote:
>
> I can't examine this in more detail myself (I'm not a .NET developer/user) but I couldn't tell from the brief information on the page whether or not it uses "hypertext as the engine of application state" i.e. links in the head/body of the responses to other parts of the system. Can someone confirm or deny this?
>
Not really. That documentation tells you how to construct URLs...
"HTTP request URLs use the following format:
serviceRoot/accountId/fileSetName/fileName?query"
They define <fileSets> and <fileSet> documents, but those deliver you
'fileSetName' attributes, from which you construct URLs to retrieve
the documents. Better would be for them to tell you the URLs.
Hugh
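To illustrate Hugh's point, a hypermedia-friendly version of the `<fileSets>` document could carry resolvable URIs instead of bare names, so a client follows links rather than assembling them from a documented template. This is a hypothetical sketch only: the element names mimic the ones mentioned above, but the `href` attribute and the URIs are invented, not the actual Windows Live format:

```xml
<!-- Hypothetical sketch, not the real Windows Live document: each
     fileSet carries a ready-to-dereference href, so the client never
     needs the serviceRoot/accountId/fileSetName construction rule. -->
<fileSets>
  <fileSet name="photos"
           href="https://storage.example.net/someAccountId/photos/" />
  <fileSet name="documents"
           href="https://storage.example.net/someAccountId/documents/" />
</fileSets>
```

With links like these, the URL-construction rules in the documentation become an implementation detail the server is free to change.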
>>>>> "Colin" == Colin Taylor <colin.taylor@...> writes:
Colin> Any suggestions or comments please. I am having some
Colin> trouble sleeping at night because I'm creating a new
Colin> resource from DELETE. Should I for my sins, or will these
Colin> feelings pass?
You don't. You delete the URL. That's what delete does :-)
That there is a new URL afterwards is no problem.
On the naming, I would prefer /expired/policy/123.
--
Live long and prosper,
Berend de Boer
Colin Taylor wrote: > Hi there, > > My application has a policy resource at 'policy/123'. > > When a client DELETEs 'policy/123' I'm actually expiring it on the server by setting the thruDatestamp on the policy table. So I now have a new expired policy resource which will be available to certain clients, but where? Is it best practice to use '/expired/policy/123', with its generic branch for other such deletions? Or, '/policy/123/expired' as a modifier on the original resource? 'expired-policy/123' perhaps? Doesn't really matter. As long as policy/123 returns a 410 or 404 then everything is tickety-boo. I don't like having foo/bar/baz operating when foo/bar isn't, so I'd favour expired/policy/ or expired-policy/ but it's more a matter of taste and intuition than anything else. If one stands out as particularly easy to implement, or easy for clients to deal with, go for that.
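A minimal sketch of that advice: DELETE expires the record, the old URI answers 410 Gone, and the snapshot lives under the expired/ branch. The in-memory store, the function names, and the choice of 204 for the DELETE response are all illustrative assumptions, not Colin's actual implementation:

```python
# Toy in-memory stores standing in for the policy table.
policies = {"123": {"holder": "C. Taylor", "thruDatestamp": None}}
expired = {}

def delete_policy(policy_id):
    """DELETE /policy/<id>: expire rather than destroy."""
    policy = policies.pop(policy_id)
    policy["thruDatestamp"] = "2007-10-18"  # stand-in for "now"
    expired[policy_id] = policy
    return 204                              # No Content

def get_policy(policy_id):
    """GET /policy/<id>: 410 Gone once it has been expired."""
    if policy_id in policies:
        return 200, policies[policy_id]
    if policy_id in expired:
        return 410, None
    return 404, None

def get_expired_policy(policy_id):
    """GET /expired/policy/<id>: the post-deletion resource."""
    if policy_id in expired:
        return 200, expired[policy_id]
    return 404, None
```

The 410 at the old URI is what keeps the DELETE honest; the expired resource is simply a different resource that happens to share history with the deleted one.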
Hi, I would like to encourage people to talk more of combining the Semantic Web and REST. Much of the research I have read on the Semantic Web is about extending the standard SOAP WS-* stack. This includes WSMO and OWL-S amongst others. My opinion is that one could create the Semantic Web with RESTful Web Services by just adding semantic annotations to existing Web pages. This is an oversimplification I know, but I can't help feeling frustrated about all the effort that is spent on OWL-S and WSMO etc. Initiatives such as OWL-S and WSMO have got to be dead ends; I cannot imagine how anyone would create a service for accessing even 20% of the information available on the Web today. Adding some semantic tags to regular Web pages, however, that one can cope with. RDF is after all called Resource Description Framework, not Complicated Web Service Description Framework ;) Do you agree?
I have written about Restful semantic web services quite a bit. "Restful semantic web services" http://blogs.sun.com/bblfish/entry/restful_semantic_web_services "Restful web services: the book" http://blogs.sun.com/bblfish/entry/restful_web_services_the_book "foaf and openid" http://blogs.sun.com/bblfish/entry/foaf_openid which would be an example of a really simple "web service". I agree that one should be able to put services together in a much more comprehensible manner by applying RDF restfully to services. Henry Home page: http://bblfish.net/ Sun Blog: http://blogs.sun.com/bblfish/ Foaf name: http://bblfish.net/people/henry/card#me On 23 Oct 2007, at 10:22, erlingwl wrote: > Hi, > > I would like to encourage people to talk more of combining the > Semantic Web and REST. Much of the research I have read on the > Semantic Web is about extending the standard SOAP WS-* stack. This > includes WSMO and OWL-S amongst others. > > My opinion is that one could create the Semantic Web with RESTful Web > Services by just adding semantic annotations to existing Web pages. > This is an oversimplification I know, but I can't help feeling > frustrated about all the effort that is spent on OWL-S and WSMO etc. > > Initiatives such as OWL-S and WSMO have got to be dead ends; I cannot > imagine how anyone would create a service for accessing even 20% of > the information available on the Web today. Adding some semantic > tags to regular Web pages, however, that one can cope with. > > RDF is after all called Resource Description Framework, not > Complicated Web Service Description Framework ;) > > Do you agree?
On Tue, 2007-10-23 at 08:22 +0000, erlingwl wrote: > I would like to encourage people to talk more of combining the > Semantic Web and REST. Much of the research I have read on the > Semantic Web is about extending the standard SOAP WS-* stack. This > includes WSMO and OWL-S amongst others. > > ... snip .... > > RDF is after all, called Resource Description Framework, not > Complicated Web Service Description Framework ;) > > Do you agree? Yes. In fact it seems that your research has been surprisingly limited - the vast majority of RDF/semweb usage is in a RESTful context. I and many other semweb folk tend not to use the term "web services" for REST systems which could perhaps explain your perception if you searched for semantic web services or something similar. Ian
erlingwl wrote: > My opinion is that one could create the Semantic Web with RESTful Web > Services by just adding semantic annotations to existing Web pages. See also "RDF hyperlinking" http://www.w3.org/2001/sw/Europe/talks/xml2003/Overview-6.html I also touched on the topic briefly in a talk in XTech 2005: http://idealliance.org/proceedings/xtech05/papers/02-07-04/ Cheers, L.
I hope it is ok to post this here. I need some help. I have to implement a REST application in which, for example, the insert method looks like this: "URL/insert?format=xml&businessID=1234&customerId=54321&itemID=123" I don't know where to start. Where do I get the values for businessId and customerId? How do I invoke the script that will handle the logic for this service? Thanks!!!
On 10/25/07, epombar <epombar@...> wrote: > "URL/insert?format=xml&businessID=1234&customerId54321&itemID=123" Ouch. This is not very RESTful, nor very elegant. Where does this "spec" come from? > I don't know where to start, > Where do I get the values for businessId and customerId? > How do I invoke the script that will handle the logic for this service? What technology are you using? It's starting to sound like you're not after REST, but some programming list. This is all "basic programming for web 101". :) Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
Hi,
I have just found that the restlet library breaks down on a malformed
header. When reading
Danny Ayers' foaf file, which has the following headers
hjs@bblfish:0$ curl -I http://dannyayers.com/me.rdf
HTTP/1.1 200 OK
Date: Thu, 25 Oct 2007 12:58:18 GMT
Server: WYMIWYG RWCF (the KnoBot foundation) 0.3
Content-Type: application/rdf+xml
Cache-Control: must-revalidate
Expires: -1
Set-Cookie: lang=; path=/
Pragma: no-cache
Content-Length: 17092
Now I think it is likely that Expires: -1 is wrong, though I have
not checked the specs. But is it reasonable for the
whole download to stop and throw the following exception?
java.lang.NullPointerException
at org.restlet.util.DateUtils$ImmutableDate.<init>
(DateUtils.java:249)
at org.restlet.util.DateUtils$ImmutableDate.valueOf
(DateUtils.java:234)
at org.restlet.util.DateUtils.unmodifiable(DateUtils.java:195)
at org.restlet.resource.Variant.setExpirationDate
(Variant.java:321)
at com.noelios.restlet.http.HttpClientCall.getResponseEntity
(HttpClientCall.java:251)
at com.noelios.restlet.http.HttpClientConverter.commit
(HttpClientConverter.java:110)
at com.noelios.restlet.http.HttpClientHelper.handle
(HttpClientHelper.java:79)
at org.restlet.Client.handle(Client.java:110)
at org.restlet.Uniform.handle(Uniform.java:97)
at net.java.sommer.addressbook.AgentResourceObject$1
$1.cacheUrlToFile(AgentResourceObject.java:263)
at net.java.sommer.addressbook.AgentResourceObject$1
$1.doInBackground(AgentResourceObject.java:177)
at net.java.sommer.addressbook.AgentResourceObject$1
$1.doInBackground(AgentResourceObject.java:303)
at org.jdesktop.swingworker.SwingWorker$1.call(Unknown Source)
at java.util.concurrent.FutureTask$Sync.innerRun
(FutureTask.java:269)
at java.util.concurrent.FutureTask.run(FutureTask.java:123)
at org.jdesktop.swingworker.SwingWorker.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask
(ThreadPoolExecutor.java:650)
at java.util.concurrent.ThreadPoolExecutor$Worker.run
(ThreadPoolExecutor.java:675)
at java.lang.Thread.run(Thread.java:613)
Home page: http://bblfish.net/
Sun Blog: http://blogs.sun.com/bblfish/
Foaf name: http://bblfish.net/people/henry/card#me
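For what it's worth, RFC 2616 (section 14.21) addresses exactly this case: clients and caches MUST treat invalid Expires date formats, "especially including the value '0'", as in the past, i.e. already expired. So a "-1" should mean "already expired" rather than abort the whole download. A defensive sketch in Python's stdlib (not the Restlet code; the function name is mine):

```python
from email.utils import parsedate_to_datetime

def parse_expires(value):
    """Parse an Expires header value. Per RFC 2616 sec. 14.21, any
    invalid date (including "-1" and "0") just means the response is
    already expired, so we return None instead of raising."""
    try:
        return parsedate_to_datetime(value)
    except (TypeError, ValueError):
        return None  # treat as "already expired"
```

Returning a sentinel instead of propagating the exception is what keeps one sloppy header from killing an otherwise good 200 response.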
Heuh. Sorry. That's where I meant to post this. Mail address completion in Apple Mail does not yet implement mind reading technology... Sorry folks, Henry On 25 Oct 2007, at 15:29, Mark Baker wrote: > Restlets have their own mailing list, no?
I've hit a conundrum and would like to solicit advice: I am retooling the search functionality of a digital library application and have defined a search URL template (a la CQL or OpenSearch) that allows a human or machine user to express a complex search with a properly formatted URL string. E.g.,: http://example.com/search?query=atom+or+rss+syndication&type=article \ &collection=my_collection&collection=your_collection&start=10&max=40 I take that url string and "normalize" it (lower-case all query terms, alphabetize multiple params, etc) into a data structure such that all equivalent searches will result in the exact same data structure (NOTE that 'start' and 'max' are NOT included in the data structure). I then get the md5 hash of the string representation of that data structure -- this will serve as a key to a cache of searches. I then derive an sql statement from the data structure and perform the search. The result of the search is simply a set of ordered ID numbers for all of the items that match the search. I take that entire id number string and cache it, with the search md5 hash as the key. Next, I apply the "start" and "max" parameters to this string, resulting in my viewable set of items, which I load up (i.e., grab all of the data for these items) and send back in the response. Note that the original search query is "bookmark-able" and will always return the same response. Embedded in the response are links to the "previous" and "next" set of items (computed based on the "start" and "max" params of the original request). BUT, instead of embedding that entire query string, I can simply embed a URL like: http://example.com/search_hash/56YYYufgsfggfccFF098?start=50&max=40 for the "next" link. Well, I appear to have used a GET request to create a new resource in the search_hash resource 'collection', which I now refer to in the resulting hypertext of that new search. Should the original query have used POST instead of GET?
(That seems wrong somehow, and in the case of repeat searches it WON'T create a new resource, but using GET is CLEARLY resulting in a 'side effect', which is not allowed). any advice/observations on this? many thanks- Peter Keane daseproject.org
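The normalize-and-hash step described above can be sketched roughly like this. The parameter names follow the example URL; hashing the urlencoded canonical form with md5 is an assumption about the described scheme, not the actual implementation:

```python
import hashlib
from urllib.parse import parse_qsl, urlencode

# Paging params are excluded from the cache key, per the description.
PAGING = {"start", "max"}

def search_hash(query_string):
    """Normalize a search query string (drop paging params, lower-case
    terms, sort params) and return the md5 cache key."""
    pairs = [(k, v.lower()) for k, v in parse_qsl(query_string)
             if k not in PAGING]
    canonical = urlencode(sorted(pairs))
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()
```

All equivalent searches collapse to the same key, which is what makes the `search_hash/...` URI stable and bookmarkable.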
>>>>> "pkeane" == pkeane littlehat <pkeane@...> writes:
pkeane> Well, I appear to have used a GET request to create a new
pkeane> resource in the search_hash resource 'collection', which I
pkeane> now refer to in the resulting hypertext of that new
pkeane> search. Should the original query have used POST instead
pkeane> of GET? (That seems wrong somehow, and in the case of
pkeane> repeat searches it WON'T create a new resource, but using
pkeane> GET is CLEARLY resulting in a 'side effect', which is not
pkeane> allowed).
pkeane> any advice/observations on this?
First, every language that discusses the notion of side effects will
say it is about "observable" side effects. If a client can't detect
the change, there is no side effect as defined.
The spec says this:
Naturally, it is not possible to ensure that the server does not
generate side-effects as a result of performing a GET request; in
fact, some dynamic resources consider that a feature. The important
distinction here is that the user did not request the side-effects,
so therefore cannot be held accountable for them.
GET is also defined as idempotent, so repeated calls should give
identical results.
On both properties your behaviour seems fine to me.
--
Cheers,
Berend de Boer
> Well, I appear to have used a GET request to create a new > resource in the search_hash resource 'collection', which I > now refer to in the resulting hypertext of that new search. You didn't create the resource, but by constructing that URI you exposed a resource (where 'resource' is a concept or result-set which may or may not have a representation at this point in time). You can expose resources before they exist - like "tomorrow's lottery numbers" - but don't worry about generating the representation on the fly, that's not really 'creating a resource'. (my opinion only, of course) > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of pkeane_littlehat > Sent: Friday, October 26, 2007 9:17 AM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] advice sought re GET vs POST for > RESTful search > > I've hit a conundrum and would like to solicit advice: > > I am retooling the search functionality of a digital library > application and have defined a search URL template (a la CQL or > OpenSearch) that allows a human or machine user to express a > complex search with a properly formatted URL string. E.g.,: > > http://example.com/search?query=atom+or+rss+syndication&type=a > rticle \ > &collection=my_collection&collection=your_collection&start=10&max=40 > > I take that url string and "normalize" it (lower-case all > query terms, alphabetize multiple params, etc) into a data > structure such that all equivalent searches will result in > the exact same data structure (NOTE that 'start' and 'max' > are NOT included in the data structure). I then get the md5 > hash of the string representation of that data structure -- > this will serve as a key to a cache of searches. I then > derive an sql statement from the data structure and perform > the search. The result of the search is simply a set of > ordered ID numbers for all of the items that match the > search. 
I take that entire id number string and cache it, > with the search md5 hash as the key. Next, I apply the > "start" and "max" parameters to this string, resulting in my > viewable set of items,. which I load up (i.e., grab all of > the data for these items) and send back in the response. > Note that the original search query is "bookmark-able" and > will always return the same response. > > Embedded in the response are links to the "previous" and > "next" set of items (computed based on the "start" and "max" > params of the original request). BUT, instead of embedding > that entire query string, I can simply embed a URL like: > > http://example.com/search_hash/56YYYufgsfggfccFF098?start=50&max=40 > > for the "next" link. > > Well, I appear to have used a GET request to create a new > resource in the search_hash resource 'collection', which I > now refer to in the resulting hypertext of that new search. > Should the original query have used POST instead of GET? > (That seems wrong somehow, and in the case of repeat searches > it WON'T create a new resource, but using GET is CLEARLY > resulting in a 'side effect', which in not allowed). > > any advice/observations on this? > > many thanks- > Peter Keane > daseproject.org > > > > > > > > Yahoo! Groups Links > > >
My 0.02 is that you've built something very smart and not at all in violation of REST principles -- a GET is supposed to be 'safe' in the sense that there is no obligation for the client, and no problem in calling it repeatedly without ill effect. To me, this translates to a 'logical' read. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ On Oct 26, 2007, at 6:17 PM, pkeane_littlehat wrote: > I've hit a conundrum and would like to solicit advice: > > I am retooling the search functionality of a digital library > application and have defined a search URL template (a la CQL or > OpenSearch) that allows a human or machine user to express a complex > search with a properly formatted URL string. E.g.,: > > http://example.com/search?query=atom+or+rss+syndication&type=article \ > &collection=my_collection&collection=your_collection&start=10&max=40 > > I take that url string and "normalize" it (lower-case all query terms, > alphabetize multiple params, etc) into a data structure such that all > equivalent searches will result in the exact same data structure (NOTE > that 'start' and 'max' are NOT included in the data structure). I > then get the md5 hash of the string representation of that data > structure -- this will serve as a key to a cache of searches. I then > derive an sql statement from the data structure and perform the > search. The result of the search is simply a set of ordered ID > numbers for all of the items that match the search. I take that > entire id number string and cache it, with the search md5 hash as the > key. Next, I apply the "start" and "max" parameters to this string, > resulting in my viewable set of items,. which I load up (i.e., grab > all of the data for these items) and send back in the response. Note > that the original search query is "bookmark-able" and will always > return the same response. 
> > Embedded in the response are links to the "previous" and "next" set of > items (computed based on the "start" and "max" params of the original > request). BUT, instead of embedding that entire query string, I can > simply embed a URL like: > > http://example.com/search_hash/56YYYufgsfggfccFF098?start=50&max=40 > > for the "next" link. > > Well, I appear to have used a GET request to create a new resource in > the search_hash resource 'collection', which I now refer to in the > resulting hypertext of that new search. Should the original query > have used POST instead of GET? (That seems wrong somehow, and in the > case of repeat searches it WON'T create a new resource, but using GET > is CLEARLY resulting in a 'side effect', which in not allowed). > > any advice/observations on this? > > many thanks- > Peter Keane > daseproject.org > > >
* pkeane_littlehat <pkeane@...> [2007-10-27 05:10]: > (That seems wrong somehow, and in the case of repeat searches > it WON'T create a new resource, So it’s idempotent; good. > but using GET is CLEARLY resulting in a 'side effect', which is > not allowed). Actually, they are certainly allowed – otherwise a web server would not be allowed to keep logs, f.ex. But GET is defined to be safe, which means, as Stefan said, that the client is not responsible for any side effects. The server can do anything it wants to handle a GET request, involving any side effects whatsoever. However, there is a clear understanding that a GET request from a client can never be construed as a demand for any of these side effects. The client bears no blame for issuing a GET request that caused the server to do something untoward; if something undesirable happened, it’s the server’s fault. If you are looking at some particular side effect, such as deleting a record or the like, which the client should only request in full knowledge of the consequences, then you require whatever unsafe method is applicable to that side effect. If a bad thing happens, it is then the client’s fault for having asked for it. In your case, the resource creation is incidental, not something the client is even really aware of; if there’s a problem because of that, the blame lies with the server (developer :-) ). So making this a side effect upon GET is perfectly fine. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Original question chopped up: > http://example.com/search?query=atom+or+rss+syndication&type=article \ > &collection=my_collection&collection=your_collection&start=10&max=40 > > ..cache it, with the search md5 hash as the key. > > Embedded in the response are links to the "previous" and "next" set of > items (computed based on the "start" and "max" params of the original > request). BUT, instead of embedding that entire query string, I can > simply embed a URL like: > > http://example.com/search_hash/56YYYufgsfggfccFF098?start=50&max=40 > > for the "next" link. > > Well, I appear to have used a GET request to create a new resource in > the search_hash resource 'collection', which I now refer to in the > resulting hypertext of that new search. Should the original query > have used POST instead of GET? (That seems wrong somehow, and in the > case of repeat searches it WON'T create a new resource, but using GET > is CLEARLY resulting in a 'side effect', which is not allowed). > This is also a great question for those of us in the sub-cult of RESTianism (! :-) who prefer opaque URIs where possible. Of course, the answers you already got are totally correct, and you should follow their direction - if you don't want URI opacity. Why you may like URI opacity is a separate discussion, which I'd be happy to address if you're interested. The approach below is simply suggested to round out the discussion... Those of us in this opaque-URI cult would, indeed, have you POST your original query. In general, you would POST your query to the (opaque!) URI of the collection you're querying. This collection then returns a redirect to a results resource that it has set up for you. In your case, that would be easy - just redirect after the POST to your cache URI: http://example.com/search_hash/56YYYufgsfggfccFF098?start=0&max=40 The results here can either be tailored to this client in some way, or be a cache of this specific query for everyone to use. 
Further, it may either be updated as the possible search results change, or stay as a constant, eternally-cacheable snapshot. Which of these can perhaps be set as an additional argument in the original POST query body. Of course, this resource can have suitable headers sent along with it, optimised to these particular dynamic and/or personalised characteristics. You may need to avoid nervous proxy-cache behaviour around query URIs by chunking up your results, after all, in chunks of 40 with their own URIs, or, again, in chunks of a size asked for in the original query. Note that a GET on the collection URI should return a content-type that implies this URI's ability to accept such query POSTs. For browsers, that'd be a query form. For server-server REST integration, we can talk on this list about defining a generic query protocol standard for collections... Of course, the redirect is an extra call - amounting to two (probably single-packet) network latencies. I'd be interested in the opinions of this list on whether it is OK to optimise away this round-trip, by putting the results in the POST response and also putting the opaque URI in the Content-Location header. Unsavvy clients would redirect to the Location, savvy clients would note the Location/Content-Location for future accesses, but take the body directly, for now. I'd also be interested to know how big this particular REST sub-cult is - I hope it's not just me! =0) Cheers! Duncan Cragg
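Duncan's POST-then-redirect flow can be sketched as a framework-neutral pair of handlers. Everything here is an assumption for illustration: the in-memory cache, the choice of 303 See Other, the `search_hash` URI layout, and the function names are not from any actual implementation discussed in the thread.

```python
import hashlib

SEARCH_CACHE = {}  # opaque hash -> ordered list of matching item ids

def post_query(collection_uri, query_body, run_search):
    """Handle a POST of a query body to a collection URI.

    Runs the search (or reuses a cached result for a repeat query),
    stores the full ordered id list under an opaque hash, and answers
    with a redirect to the results resource it has set up.
    """
    key = hashlib.md5(query_body.encode("utf-8")).hexdigest()
    if key not in SEARCH_CACHE:           # repeat queries reuse the cache
        SEARCH_CACHE[key] = run_search(query_body)
    location = f"{collection_uri}/search_hash/{key}?start=0&max=40"
    return 303, {"Location": location}    # 303 See Other -> client GETs results

def get_results(key, start, max_):
    """GET on the results resource: slice a page out of the cached id list."""
    ids = SEARCH_CACHE[key]
    return 200, ids[start:start + max_]
```

Clients then page through the result set with plain GETs on the opaque results URI, never needing to understand (or rebuild) the original query.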
Hi Duncan, * Duncan Cragg <rest-discuss@...> [2007-10-29 16:35]: > Those of us in this opaque-URI cult would, indeed, have you > POST your original query. I don’t follow: why is URI opaqueness here predicated on the use of POST? > In general, you would POST your query to the (opaque!) URI of > the collection you're querying. This collection then returns a > redirect to a results resource that it has set up for you. You can just as well return a redirect for GETs, no? > I'd be interested in the opinions of this list on whether it is > OK to optimise away this round-trip, by putting the results in > the POST response and also putting the opaque URI in the > Content-Location header. Unsavvy clients would redirect to the > Location, savvy clients would note the > Location/Content-Location for future accesses, but take the > body directly, for now. You mean you put the body of the redirect target URI in the redirect response body, and add a Content-Location with the same value as the Location header? If so, I guess that is in fact a clever idea. I just don’t know if *any* client at all is going to be smart enough to catch this sort of equivalence. I’m also unsure about how this plays with the question of headers belonging to the redirect vs. to the resource at the target URI. It might be unavoidable for clients that are smart enough to still issue a HEAD or conditional GET for the redirect target to ensure correctness of the resource metadata they have stored. If that turns out to be true then you gain nothing because it’s two roundtrips anyhow. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
> This is also a great question for those of us in the sub-cult of > RESTianism (! :-) who prefer opaque URIs where possible. Of course, > the answers you already got are totally correct, and you should follow > their direction - if you don't want URI opacity. Why you may like URI > opacity is a separate discussion, which I'd be happy to address if > you're interested. The approach below is simply suggested to round out > the discussion... I have heard of this cult. I am told they are responsible for the strange chants and incantations heard emanating from the back room at our regular REST ceremonies. It's been said if you enter that room you will never return.... > (...) > The results here can either be tailored to this client in some way, or > be a cache of this specific query for everyone to use. Further, it > may either be updated as the possible search results change, or stay > as a constant, eternally-cacheable snapshot. Which of these can > perhaps be set as an additional argument in the original POST query > body. Of course, this resource can have suitable headers sent along > with it, optimised to these particular dynamic and/or personalised > characteristics. You may need to avoid nervous proxy-cache behaviour > around query URIs by chunking up your results, after all, in chunks of > 40 with their own URIs, or, again, in chunks of a size asked for in > the original query. You may have answered another, related issue here that's been on my mind regarding personalization. The application now either: 1. retrieves any "personalized" data as a separate XMLHttpRequest call and updates the DOM with that info (OK, so I have a cookie that holds the current user's username that allows me to create a user-specific Ajax GET request -- a venial REST sin, I think...), or 2. 
the user's username is part of the URL itself and creates, essentially, a custom web application for this user since all URLs include the username (I use this for the administrative side where the user is "logged in" as an admin and re-use of the same queries is not as important as in the read-only view). BUT...If I include the username as a query parameter and, in essence create new resources tailored to this user, I could switch back and forth between personalized and non-personalized interactions fairly easily. As you say, these should be POSTs (although I note that Aristotle suggested in his reply to your post that GET would be OK there, at least for search). Hmmm...am I wading into dangerous waters if I start creating these "idempotent" side-effects not for caching purposes, but for personalization? I like the idea of keeping these as GET requests since that will in almost all cases be the result of clicking a link. -Peter Keane http://blogs.law.harvard.edu/pkeane/ (<--brand new blog) > (...) > I'd also be interested to know how big this particular REST sub-cult > is - I hope it's not just me! =0) > > Cheers! > > Duncan Cragg >
> > This is also a great question for those of us in the sub-cult of > > RESTianism (! :-) who prefer opaque URIs where possible. Of course, > > the answers you already got are totally correct, and you should follow > > their direction - if you don't want URI opacity. Why you may like URI > > opacity is a separate discussion, which I'd be happy to address if > > you're interested. The approach below is simply suggested to round out > > the discussion... > > I have heard of this cult. I am told they are responsible for the strange > chants and incantations heard emanating from the back room at our regular > REST ceremonies. It's been said if you enter that room you will never > return.... Yup, that's us. You should drop in... [Mmmwwaaaaahhaaaahaaa!] > > The results here can either be tailored to this client in some way, or > > be a cache of this specific query for everyone to use. Further, it > > may either be updated as the possible search results change, or stay > > as a constant, eternally-cacheable snapshot. Which of these can > > perhaps be set as an additional argument in the original POST query > > body. Of course, this resource can have suitable headers sent along > > with it, optimised to these particular dynamic and/or personalised > > characteristics. You may need to avoid nervous proxy-cache behaviour > > around query URIs by chunking up your results, after all, in chunks of > > 40 with their own URIs, or, again, in chunks of a size asked for in > > the original query. > > You may have answered another, related issue here that's been on my mind > regarding personalization. The application now either: 1. retrieves any > "personalized" data as a separate XMLHttpRequest call and updates the DOM > with that info (OK, so I have a cookie that holds the current user's > username that allows me to create a user-specific Ajax GET request -- a > venial REST sin, I think...), or 2. 
the user's username is part of the URL > itself and creates, essentially, a custom web application for this > user since all URLs include the username (I use this for the > administrative side where the user is "logged in" as an admin and re-use > of the same queries is not as important as in the read-only view). I'd like whoever thinks user-specific requests are a venial REST sin to explain why! :-) Whether triggered by an Auth or a Cookie header, you are allowed to either Vary or redirect. You lose cacheability for everyone - what a surprise! It's just for one user! The browser can still cache it, of course. Auth/Cookie Variation amounts, in effect, to adding the user id to the URL anyway: your second option, modulo the fact that in the first case, sharing the URI gives a different result (surprise!) and in the second case, they're not able to see it, or get redirected to their own resource anyway.. The sin is to personalise resources that needn't be - losing linking and caching opportunities. > As you say, these should be POSTs (although I note that Aristotle > suggested in his reply to your post that GET would be OK there, at least > for search). I wouldn't say 'should' (the other responses were quite valid, RESTfully). I prefer to POST queries, firstly because the opacity tenet of my bizarre cult means I can't have GETs on transparent queries in the URI; plus it does fit better with the fact that you're triggering the creation of a new resource and with the fact that what you're asking for is more about /you/ than about the collection - the collection's never seen that query URI before - 'you said it, not me!'. Finally, and we're getting into the more pragmatic reasons for URI opacity, putting the query template into a POST body is more flexible and more standardisable: you can talk about content/MIME types instead of URI templates. URI templates feel more like tunneling a schema through a limited length, limited charset, single-line string. 
They actually feel like that (definitely non-REST) evil: tunnelled function calls! It's a slippery slope.. > .. I like the idea of keeping these as GET requests since > that will in almost all cases be the result of clicking a link. Anything that supplies a link can supply the opaque one, especially if it's a link back to the same site. Other sites can do the query POST prior to building the page with the opaque link in it - either automatically or manually. Cheers! Duncan
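The contrast Duncan draws, tunnelling a query schema through the URL versus posting it as a typed body, can be made concrete. This is only an illustration: the media type, the field names, and the `/products` URI are invented for the example, not taken from any implementation in the thread.

```python
import json
from urllib.parse import urlencode

query = {"name": "widget", "collection": ["mine", "yours"]}

# Transparent-URI style: the query schema is squeezed into the URL itself,
# so every client must know the template /products?name=...&collection=...
get_uri = "/products?" + urlencode(query, doseq=True)

# Opaque-URI style: the same query travels as a POST body with its own
# (hypothetical) media type; the results URI is whatever the server
# hands back in its redirect, and clients never parse or build it.
post_request = {
    "method": "POST",
    "uri": "/products",
    "headers": {"Content-Type": "application/x-example-query+json"},
    "body": json.dumps(query),
}
```

The POST body carries the full structure of the query (nesting, repeated fields, types) without flattening it into a single-line, limited-charset string, which is the flexibility argument made above.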
Hi all, I'm designing an API which exposes data like companies, products, etc. To access company-data the API defines URLs like this "/companies/<id>", and for product-data URLs look like this "/products/<id>". This way I can search for companies with a URL like this "/companies?name=<pattern>" or for products with URLs: "/products?name=<pattern>". Then I thought, products actually belong to the companies producing them. So I came up with this product-URL: "/companies/<comp-id>/products/<prod-id>". But here's my problem now: how should a search be designed which is to search among all products of all companies - a global product search that is? "/companies/<comp-id>/products?name=<pattern>" doesn't work globally - at least it shouldn't. Am I to expose something like "/search?type=products&name=<pattern>" or are there reasonable alternatives? Or am I better off with the first approach? How would you design such an API? Any hints are highly appreciated! Thanks, Stefan
On Oct 30, 2007, at 5:54 PM, Stefan Hübner wrote:
> Hi all,
>
> I'm designing an API which exposes data like companies, products,
> etc.
>
> To access company-data the API defines URLs like this
> "/companies/<id>", and for product-data URLs look like this
> "/products/<id>". This way I can search for companies with a URL like
> this "/companies?name=<pattern>" or for products with URLs:
> "/products?name=<pattern>".
>
> Then I thought, products actually belong to the companies producing
> them. So I came up with this product-URL:
> "/companies/<comp-id>/products/<prod-id>".
>
> But here's my problem now: how should a search be designed which is to
> search among all products of all companies - a global product search
> that is? "/companies/<comp-id>/products?name=<pattern>" doesn't work
> globally - at least it shouldn't.
>
> Am I to expose something like "/search?type=products&name=<pattern>"
> or are there reasonable alternatives?
>
> Or am I better off with the first approach?
>
> How would you design such an API?
>
>
I see nothing wrong with having both options -- i.e. /products is the
collection resource for all products, and /companies/{id}/products is
the company-specific one. I assume that in both cases, you'd return a
list of links.
Beware of over-emphasizing URI meaning, though -- it's nice to have
readable URIs, but they shouldn't "leak" into your design.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
> Any hints are highly appreciated! Thanks,
> Stefan
>
>
On 30/10/2007, Stefan Tilkov <stefan.tilkov@...> wrote:
[snip]
> I see nothing wrong with having both options -- i.e. /products is the
> collection resource for all products, and /companies/{id}/products is
> the company-specific one. I assume that in both cases, you'd return a
> list of links.
OK. So /products would return e.g. XML like
<products>
<product xlink:href="/companies/1/products/1" />
<product xlink:href="/companies/1/products/2" />
<product xlink:href="/companies/2/products/1" />
<product xlink:href="/companies/3/products/1" />
... and so on, right?
On my way home just after posting my first mail, I realized I
should look at the application from a Website-point-of-view. Meaning:
there can be several lists of similar things in different places of
the application. As long as they link to valid documents and clients
get what they expect when traversing those links.
> Beware of over-emphasizing URI meaning, though -- it's nice to have
> readable URIs, but they shouldn't "leak" into your design.
Could you please explain your point a bit?
On 10/30/07, Stefan Hübner <sthuebner@...> wrote: > Hi all, > > I'm designing an API which exposes data like companies, products, etc. > > To access company-data the API defines URLs like this > "/companies/<id>", and for product-data URLs look like this > "/products/<id>". This way I can search for companies with a URL like > this "/companies?name=<pattern>" or for products with URLs: > "/products?name=<pattern>". > > Then I thought, products actually belong to the companies producing > them. So I came up with this product-URL: > "/companies/<comp-id>/products/<prod-id>". > > But here's my problem now: how should a search be designed which is to > search among all products of all companies - a global product search > that is? "/companies/<comp-id>/products?name=<pattern>" doesn't work > globally - at least it shouldn't. > > Am I to expose something like "/search?type=products&name=<pattern>" > or are there reasonable alternatives? > > Or am I better off with the first approach? > > > How would you design such an API? > You don't say this explicitly, but it sounds like "designing an API" means you are designing a formula that programmers of client programs can use to construct a query url. If that is what you mean, well, I wish we'd do less of that. The form of a query url should only be important to the server implementor. The server should deliver an HTML form or Xform or some such, that tells the client to fill in some blanks, and then using the rules of HTML forms, the client composes the URL. When you do that, it won't matter if you decide for queries the one way, then three weeks later decide to do it the other way, or some third way. If this is machine to machine communication, it amounts to telling developers to look for <input> elements named "product-id" and to form the url according to the rules of the forms language, e.g. append "?name=value..." to the form's action attribute. 
I admit I'm not sounding very convincing with this argument, because I'm asking you a) to have clients make an extra GET first to retrieve the form, and b) they still have to have some foreknowledge of the element name attributes they should understand. It does however insulate them from having to understand the form of your urls. I guess I'm putting this out here for discussion, and not really suggesting you implement it this way. But if anyone out there thinks there's a good way to do what I'm trying to do, which is to drive the interaction through forms, please jump in. Hugh
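Hugh's suggestion, let the server's form tell the client how to build the query URL, amounts to a machine client applying the standard HTML form-submission rule itself. A minimal sketch, assuming a GET form; the helper name and the "product-id" field are hypothetical:

```python
from urllib.parse import urlencode

def submit_form(action, fields):
    """Compose a GET query URL from a form's action URI and its filled-in
    fields, following the HTML forms rule: percent-encode the name/value
    pairs and append '?name=value&...' to the action.

    The client only needs to recognise the input names it understands;
    the shape of the resulting URL remains the server's private business
    and can change without breaking any client.
    """
    return f"{action}?{urlencode(fields)}"

# A client that found <input name="product-id"> in the retrieved form:
url = submit_form("/products", {"product-id": "42"})
assert url == "/products?product-id=42"
```

This is the extra GET Hugh mentions: one round-trip to fetch the form, after which URL construction is entirely rule-driven rather than template-driven.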
On Oct 31, 2007, at 11:35 PM, Stefan Hübner wrote:
> On 30/10/2007, Stefan Tilkov <stefan.tilkov@...> wrote:
> [snip]
> > I see nothing wrong with having both options -- i.e. /products is
> the
> > collection resource for all products, and /companies/{id}/products
> is
> > the company-specific one. I assume that in both cases, you'd
> return a
> > list of links.
>
> OK. So /products would return e.g. XML like
> <products>
> <product xlink:href="/companies/1/products/1" />
> <product xlink:href="/companies/1/products/2" />
> <product xlink:href="/companies/2/products/1" />
> <product xlink:href="/companies/3/products/1" />
>
> ... and so on, right?
>
>
Yes, or it could be
<product xlink:href="/products/1" />
or
<product xlink:href="/AD5EFFAE132865DDE" />
or whatever.
> On my way home just after posting my my first mail, I realized I
> should look at the application from a Website-point-of-view. Meaning:
> there can be several lists of similar things in different places of
> the application. As long as they link to valid documents and clients
> get what they expect when traversing those links.
>
Exactly!
>
>
> > Beware of over-emphasizing URI meaning, though -- it's nice to have
> > readable URIs, but they shouldn't "leak" into your design.
>
> Could you please explain your point a bit?
>
It's tempting to start first REST projects with obsessive URI design,
but as Hugh has pointed out, too, client-side URI construction is
always a bad sign (unless the client has received instructions on how
to construct the URI dynamically).
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
* Hugh Winkler <hughw@...> [2007-11-01 12:02]: > I admit I'm not sounding very convincing with this argument, > because I'm asking you a) to have clients make an extra GET > first to retrieve the form, and b) they still have to have some > foreknowledge of the element name attributes they should > understand. It does however insulate them from having to > understand the form of your urls. > > I guess I'm putting this out here for discussion, and not > really suggesting you implement it this way. I *would* suggest that it is implemented that way. It’s the hypermedia constraint. Truly RESTful apps are – by definition![1] – designed around the formats of the representations exchanged, not around the structure of the URIs used. Take a look at Atompub for an idea of how that looks in practice. > But if anyone out there thinks there's a good way to do what > I'm trying to do, which is to drive the interaction through > forms, please jump in. I think what you described is just fine. [1] Remember what “ReST” means… Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Hugh Winkler wrote: > I admit I'm not sounding very convincing with this argument, because > I'm asking you a) to have clients make an extra GET first to retrieve > the form, and b) they still have to have some foreknowledge of the > element name attributes they should understand. It does however > insulate them from having to understand the form of your urls. > > I guess I'm putting this out here for discussion, and not really > suggesting you implement it this way. But if anyone out there thinks > there's a good way to do what I'm trying to do, which is to drive the > interaction through forms, please jump in. Yeah, not sounding terribly convincing to me. I do think your described solution is fine, as long as there really isn't a need for 'machine to machine' communication. Actually, let's discuss this 'machine to machine' communication. I think what you're actually talking about is 'all other forms of client-server interaction for this application other than an HTML page'. I mention this because I get this sense that people think 'machine to machine' communication is bad, when in fact, it's quite good. The more clients, the merrier. I think once you've decided that you'd like to open your 'server application' up to more than just the HTML pages you provide, you really do need to describe your REST APIs *somehow* (the urls, and the data flowing over the wire). We'll obviously never agree on *how*, I'm not going to go so far as to suggest that's possible. :-) Hugh's right that you can glean some of this information from a plain-old HTML form. Perhaps lightly enhanced via MicroFormats or some such, a form really could serve as a per-API meta-description of the API itself. Interesting idea. On the other hand, pretend that you've actually described your APIs (URLs and the data that flows through them) in some kind of program-readable fashion. From this, it would be easy to dynamically build a form, in JS on the client. Is there a difference? 
-- Patrick Mueller http://muellerware.org
Duncan Cragg wrote: > This is also a great question for those of us in the sub-cult of > RESTianism (! :-) who prefer opaque URIs where possible. URIs simply *are* opaque, whether we like that or not. > Those of us in this opaque-URI cult would, indeed, have you POST your > original query. In general, you would POST your query to the (opaque!) > URI of the collection you're querying. I disagree. Somewhere the client will have to have it "explained" to it how to either construct the URI or how to construct the representation it will POST to the URI (and will have to be told that URI as well). If the information comes from a representation (an HTML form is an example of how this could happen for either GET or POST) then the constraint of hypermedia as engine is fulfilled, otherwise it isn't. This is orthogonal to reasons for choosing GET or POST.
Jon Hanna: | Duncan Cragg wrote: | > This is also a great question for those of us in the sub-cult of | > RESTianism (! :-) who prefer opaque URIs where possible. | | URIs simply *are* opaque, whether we like that or not. No, they're not. Apart from the obvious things such as scheme and host for http URIs, which can be reliably derived, there are other uses mentioned in http://www.w3.org/2001/tag/doc/metaDataInURI-31.html As a human, I can guess from http://example.org/weather/Chicago that I may try to find the weather for Boston (where I'll be soon) in http://example.org/weather/Boston . And as a URI assigner I may publish http://www.marcdegraauw.com/bestbeerbars/Amsterdam and publish a spec telling the world to fill in any Dutch city for 'Amsterdam' and find the best beer bars there. Whether URIs are opaque or not is a design choice made by the assigner, not an inherent property of URIs. Marc de Graauw http://www.marcdegraauw.com
Marc de Graauw wrote: > Jon Hanna: > > | Duncan Cragg wrote: > | > This is also a great question for those of us in the sub-cult of > | > RESTianism (! :-) who prefer opaque URIs where possible. > | > | URIs simply *are* opaque, whether we like that or not. > > No, they're not. Apart from the obvious things such as > scheme and host for http > uri's, which can reliably derived from uri's there are > other uses mentioned in > http://www.w3.org/2001/tag/doc/metaDataInURI-31.html All of which are specific uses at specific points. > As a human, I can guess from http://example.org/weather/Chicago that I may try > to find the weather for Boston (where I'll be soon) in > http://example.org/weather/Boston . And it may or may not work. All a processor *knows* about the URI http://example.org/weather/Chicago is: 1. That it identifies the same resource as another URI which is character-for-character identical to that. 2. Specific interpretation of specific components of the URI that it is its task to deal with. > Whether uri's are opaque or not is a design choice made by the assigner, > not an inherent property of uri's. No, no matter how you design it, a processor does not know more about the meaning of a URI than it is intended to know. It may be its job to take the "http" portion and say "this gets dealt with by HTTP". It may be its job to take the domain and do a look up on it. It may be its job to take the pattern weather/(.*) and do a look up for a representation of the weather in the locale with the name that matches the final bit. It may be its job simply to see if it is a character-for-character match for a previously seen URI. Anything else in the URI is unknown to the processor. That is, it is opaque. You can't "design" something as opaque. http://example.net/tghyuioordsx/dsfyijalsd/sdxfuiuwe is no more or less opaque than http://example.net/weather/Dublin. 
It's probably more obscure (though not if http://example.net/weather/Dublin returns some recommended low-cholesterol fish recipes) but obscurity is quite different to opacity.
Jon Hanna | Marc de Graauw wrote: | > Jon Hanna: | > | > | Duncan Cragg wrote: | > | > This is also a great question for those of us in the sub-cult of | > | > RESTianism (! :-) who prefer opaque URIs where possible. | > | | > | URIs simply *are* opaque, whether we like that or not. | > | > No, they're not. Apart from the obvious things such as | scheme and host for http | > uri's, which can reliably derived from uri's there are | other uses mentioned in | > http://www.w3.org/2001/tag/doc/metaDataInURI-31.html | | All of which are specific uses at specific points. | | > As a human, I can guess from | http://example.org/weather/Chicago that I may try | > to find the weather for Boston (where I'll be soon) in | > http://example.org/weather/Boston . | | And it may or may not work. | | All a processor *knows* about the URI | http://example.org/weather/Chicago is: | 1. That it identifies the same resource as another URI which is | character-for-character identical to that. | 2. Specific interpretation of specific components of the URI | that it is | its task to deal with. | | > Whether uri's are opaque or not is a design choice made by | the assigner, | > not an inherent property of uri's. | | No, no matter how you design it, a processor does not know more about | the meaning of a URI than it is intended to know. | | It may be its job to take the "http" portion and say "this gets dealt | with by HTTP". It may be its job to take the domain and do a | look up on | it. It may be its job to take the pattern weather/(.*) and | do a look up | for a representation of the weather in the locale with the name that | matches the final bit. It may be its job simply to see if it is a | character-for-character match for a previously seen URI. | | Anything else in the URI is unknown to the processor. 
| That is, it is opaque. I don't get this. Is the processor in your example server-side or client-side? If server-side, then everything you write is true, but URI opacity is a client-side issue. The server may do whatever it pleases with a URI. If you mean client-side, then the "(.*)" in "pattern weather/(.*)" isn't opaque, is it? Point is, any URI assigner, including me, may specify how its URIs are constructed, and any client (human or software) may use those specs to construct or deconstruct URIs. And if someone publishes such a spec, the URIs are no longer opaque. Regards, Marc de Graauw http://www.marcdegraauw.com
Marc de Graauw wrote:
> I don't get this. Is the processor in your example server-side or client-side?
Doesn't matter. Indeed, one of the cases where opacity is of most
importance is when a proxy cache is deciding whether a request can be
served from the cached response to a previous request - in which case it
is both a server (to the requesting client) and a client (to the origin
server).
> Point is, any uri assigner, including me, may specify how it's uri's are
> constructed, and any client (human or software) may use those specs to construct
> or deconstruct uri's. And if someone publishes such a spec, the uri's are no
> longer opaque.
Most likely part of it still will be.
Your spec is going to be a description of how a particular hypermedia
format is to be interpreted, including how the instructions for URI
construction contained in a document are to be followed.
Now, when the client receives the document it will act upon, at least
part of that document will be opaque to it. Consider:
<form action="http://example.net/weather">
<p>
<label for="city">Select City:</label>
<select name="city" id="city">
<option value="Chicago">Chicago</option>
<option value="Boston">Boston</option>
</select>
<input type="submit" value="Get Weather Report" />
</p>
</form>
This would construct the URI http://example.net/weather?city=Chicago or
the URI http://example.net/weather?city=Boston
In either case the portion http://example.net/weather?city= is just an
opaque string passed to the client by the server. The portion Chicago or
Boston is a choice between two opaque strings passed to it by the server.
Functionally the above form is the same as:
<form action="https://example.org/jabberwock">
<p>
<label for="city">Select City:</label>
<select name="slithy" id="city">
<option value="outgrabe">Chicago</option>
<option value="chortle">Boston</option>
</select>
<input type="submit" value="Get Weather Report" />
</p>
</form>
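A rough Python sketch of what a form-submitting client actually does with the two forms above (the URIs and field names are taken from the examples, not from any real service): it concatenates opaque strings handed to it by the server, assigning no meaning to any of them.

```python
from urllib.parse import urlencode

def submit_form(action, field_name, chosen_value):
    """Construct a GET form-submission URI from server-supplied strings."""
    # The client never interprets these strings; it only combines them.
    return action + "?" + urlencode({field_name: chosen_value})

# First form: the strings happen to look meaningful to a human.
uri1 = submit_form("http://example.net/weather", "city", "Chicago")

# Second form: functionally identical, though the strings are nonsense.
uri2 = submit_form("https://example.org/jabberwock", "slithy", "outgrabe")
```

The client's behavior is the same in both cases, which is the point: the apparent meaning of the first form's strings lives in the human reader, not in the processor.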
I am just starting with REST and don't know which is the proper way to model this.

I have a resource "booking" that can be cancelled. Cancellation just sets a flag to true on the server.

If after cancellation the resource is no longer GETable, I think the interface would be DELETE + 410 Gone afterwards. Does that sound right?

But if cancelled bookings are GETable, which interface would you provide? According to the RFC, PUT requests should send the state of objects, but changing this flag sounds to me different from changing regular attributes of the booking such as "number of rooms". If the resource were an invoice with a paid/unpaid flag, it would sound strange to require the entire invoice through PUT. I think the situation is analogous here.

In general, I want to know how you model "state" flags. Sometimes they aren't even settable by the client.

-- fxn
Hi all,

This issue in Restlet is now fixed in SVN and will be part of the upcoming 1.0.6 release.

Best regards,

-- Jerome Louvel
http://www.restlet.org

Alan Dean wrote:
> Henry,
>
> I agree that an Expires: -1 *ought* to be invalid.
>
> However ... I know for sure that unfortunately the ASP.NET runtime emits
> exactly this when you set Cache-Control to must-revalidate (and in other
> cases too), so it's going to be found in the wild, for sure. Unfortunately
> the guys at MS seemingly didn't read the spec [1], which stipulates that
> "To mark a response as 'already expired,' an origin server sends an
> Expires date that is equal to the Date header value."
>
> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21
>
> Regards,
> Alan Dean
* Xavier Noria <fxn@...> [2007-11-02 13:00]:
> If after cancellation the resource is no longer GETable, I
> think the interface would be DELETE + 410 Gone afterwards.
> That sounds right?
>
> But if cancelled bookings are GETable which interface would you
> provide?

DELETE + 301, I guess.

> In general I think I want to know how do you model "state"
> flags. Sometimes they aren't even setable by the client.

If you have flags other than some kind of “active/not active” status, probably by exposing them as separate resources for the client to PUT to, when it is allowed to.

And remember that the server does not need to store a bit-for-bit copy of what the client sends in PUT. If the server includes those flags in the representation it returns on GET, that doesn't mean clients have to be able to change them. If the client includes them in a PUT, the server can simply pretend they weren't there, or 400 the request, or do whatever.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
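Aristotle's point that the server need not store a bit-for-bit copy of a PUT body can be sketched server-side. This is a hypothetical handler (the field names and the "ignore silently" policy are illustrative choices, not anything from the thread beyond what he describes): client-supplied values for the read-only "cancelled" flag are simply dropped.

```python
# Hypothetical server-side merge for PUT on a booking resource.
# Read-only fields in the incoming body are silently ignored;
# a real server might instead reject the request with a 400.

READ_ONLY_FIELDS = {"cancelled"}

def apply_put(stored, put_body):
    """Merge a PUT body into stored state, ignoring read-only fields."""
    updated = dict(stored)
    for key, value in put_body.items():
        if key not in READ_ONLY_FIELDS:
            updated[key] = value
    return updated

booking = {"rooms": 1, "cancelled": False}
# Client tries to change both the room count and the cancellation flag:
booking = apply_put(booking, {"rooms": 2, "cancelled": True})
```

After the PUT, the room count has changed but the flag has not; what the client sees on a subsequent GET is the server's state, not an echo of what it sent.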
On Nov 2, 2007, at 5:09 PM, A. Pagaltzis wrote:
> * Xavier Noria <fxn@hashref.com> [2007-11-02 13:00]:
> > If after cancellation the resource is no longer GETable, I
> > think the interface would be DELETE + 410 Gone afterwards.
> > That sounds right?
> >
> > But if cancelled bookings are GETable which interface would you
> > provide?
>
> DELETE + 301, I guess.
>
With that approach we would have, for example:
DELETE /bookings/{id}
...
301 Moved Permanently
Location: /cancelled_bookings/{id}
and /bookings/{id} wouldn't be available.
> > In general I think I want to know how do you model "state"
> > flags. Sometimes they aren't even setable by the client.
>
> If you have flags other than some kind of active/not active
> status, probably by exposing them as separate resources for the
> client to PUT to, when it is allowed to.
>
That would be: I have a resource "budget", and to "approve" it I send
its representation via PUT to /budgets/approved/{id}, where the ID is
the one of the original resource?
Wouldn't a RESTful way to express this be the RPC-style action
Set the approved flag of budget with URL foo to "approved"
? That is, you pass the URL, not the representation.
> And remember that the server does not need to store a bit-for-bit
> copy of what the client sends in PUT. If the server includes
> those flags in the representation it returns on GET, that doesn't
> mean clients have to be able to change them. If the client
> includes them in a PUT the server can simply pretend they weren't
> there or 400 the request or do whatever.
>
Yeah thank you.
My question in that regard was more having to do with the possibility
of dealing with "cancelled" as a non-distinguished attribute. In that
case you'd use a PUT request to its URL with the entire representation
and the service would have separate logic when that particular
attribute changed. That smells like a wrong approach to me for
attributes like these ones which are kind of metadata so to speak.
-- fxn
* Xavier Noria <fxn@...> [2007-11-02 17:35]:
> On Nov 2, 2007, at 5:09 PM, A. Pagaltzis wrote:
>> * Xavier Noria <fxn@...> [2007-11-02 13:00]:
>> > But if cancelled bookings are GETable which interface would
>> > you provide?
>>
>> DELETE + 301, I guess.
>>
> With that approach we would have for example
>
> DELETE /bookings/{id}
> ...
> 301 Moved Permanently
> Location: /cancelled_bookings/{id}
>
> and /bookings/{id} wouldn't be available.
I meant that you would return 301 for subsequent requests to
/bookings/{id} – not in response to the DELETE itself.
A DELETE doesn’t have to result in any particular status on
subsequent requests – it just would be deceitful for the server
to return 2xx if it doesn’t plan to do *some*thing in response
to the request.
But per RFC 2616 the response to the DELETE *itself* SHOULD be
either 200, 202 or 204. So in your case the best approach would
be to return 200 and put a representation with a link to the new
address in the response body.
>> > In general I think I want to know how do you model "state"
>> > flags. Sometimes they aren't even setable by the client.
>>
>> If you have flags other than some kind of “active/not active”
>> status, probably by exposing them as separate resources for
>> the client to PUT to, when it is allowed to.
>
> That would be: I have a resource "budget", and to "approve" it
> I send its representation via PUT to /budgets/approved/{id},
> where the ID is the one of the original resource?
No, that would mean you want to store the representation of that
budget in /budgets/approved/{id} – which isn’t what you’re after.
Rather, you would PUT a body like <approval>approved</approval>
to /budgets/{id}/approved.
(Actually, you would PUT it to whichever approval URI the
resource at /budgets/{id} links to. The client doesn’t
spontaneously construct URIs, it only follows links and forms.)
> Wouldn't be a RESTful way to express this RPC-style action
>
> Set the approved flag of budget with URL foo to "approved"
>
> ? That is you pass the URL, not the representation.
That would work as well – POST <budget href="/budget/{id}"/> to
/approved_budgets. Or budget_uri=/budget/{id} I suppose. (Query
string format.)
It’s hard to say which of these options is better, because we’re
talking in pretty abstract terms. More specifics would be
necessary to make a better call.
> My question in that regard was more having to do with the
> possibility of dealing with "cancelled" as a non-distinguished
> attribute. In that case you'd use a PUT request to its URL
> with the entire representation and the service would have
> separate logic when that particular attribute changed.
Ugh, that would mean using the representation to infer which
resource is changing. That’s RPC thinking: you are putting
addressing information in the entity body. Don’t do that.
Resource addressing belongs in the URI. The representation in
a resource should describe the *state* of the resource that’s
changing, not *which* resource is changing.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On Nov 3, 2007, at 2:43 AM, A. Pagaltzis wrote:
> I meant that you would return 301 for subsequent requests to
> /bookings/{id} – not in response to the DELETE itself.
>
> A DELETE doesn't have to result in any particular status on
> subsequent requests – it just would be deceitful for the server
> to return 2xx if it doesn't plan to do *some*thing in response
> to the request.
>
> But per RFC 2616 the response to the DELETE *itself* SHOULD be
> either 200, 202 or 204. So in your case the best approach would
> be to return 200 and put a representation with a link to the new
> address in the response body.
>
Good.
> > That would be: I have a resource "budget", and to "approve" it
> > I send its representation via PUT to /budgets/approved/{id},
> > where the ID is the one of the original resource?
>
> No, that would mean you want to store the representation of that
> budget in /budgets/approved/{id} – which isn't what you're after.
>
> Rather, you would PUT a body like <approval>approved</approval>
> to /budgets/{id}/approved.
>
Ahhh, it starts to take shape. Thanks so much!
So, in that solution /budgets/{id}/approved becomes a resource. That
could be a RW attribute just fine. Would it make sense to publish that
as a write-only resource? I mean, the service is the one who decides
what to publish as GETable and what not, right? Or does REST assume all
resources are GETable (modulo authorization)?
Even if the flag is not PUTable via /budgets/{id} would it be OK to
include it as an attribute in the representation of a budget? Would
you as a client expect it as a nested resource with its own link, or
would it be OK to have it as a plain boolean attribute? Or would you
require an additional request to /budgets/{id}/approved? I start to
have answers to those questions but would like to validate them.
As far as RESTfulness is concerned, would it be OK to send a boolean
(string "true") to /budgets/{id}/approved instead of that "approved"?
> (Actually, you would PUT it to whichever approval URI the
> resource at /budgets/{id} links to. The client doesn't
> spontaneously construct URIs, it only follows links and forms.)
>
> > Wouldn't be a RESTful way to express this RPC-style action
> >
> > Set the approved flag of budget with URL foo to "approved"
> >
> > ? That is you pass the URL, not the representation.
>
> That would work as well – POST <budget href="/budget/{id}"/> to
> /approved_budgets. Or budget_uri=/budget/{id} I suppose. (Query
> string format.)
>
In that approach I would get a 201 with a Location, and a GET for that
new resource would give the same representation I sent? That is, a
link, not "a budget"?
-- fxn
Inherent to message-based architectures is the inclusion of operations in the message. I use "message-based architecture" here because I see it as being different from so-called RPC.

REST-based architecture somewhat supports message-based architectures via the use of "overloaded POST". I can instruct a URI on what operation to perform based on the contents of the POST body. However, the beauty of REST is exposed when semantic (business) state transitions are exposed as (sub)resources at well-defined URIs. The PUT to the (sub)resource simply becomes a straightforward "update" to the resource itself - no hidden meaning is in it.

For example, I could POST a "ship" instruction to /orders/{order_id} as the following:

<state>ship</state>
<qty>5</qty>

However, if I want to "cancel" that same order using the same resource (URI), the body of my POST would change to:

<state>cancel</state>
<qty>0</qty>

In my opinion, a much cleaner design is to create distinct (sub)resources for each state transition, i.e. /orders/{order_id}/shipped and /orders/{order_id}/canceled, then PUT indicating the need for the creation of the new state while updating the underlying resource. The PUT would return (at least) a link to the next logical state - it could return links to all possible "next" states. A GET on these (sub)resources would return true or false to indicate whether the parent resource is at that current (business) state.

An interesting result of such a design is that it forces a designer to look at a system (and the entities in the system) in the context of a state machine. State machines are powerful!

I've been looking at REST for about a year and some of these ideas are just beginning to crystallize. Any thoughts on this?
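The state-machine view described above can be sketched as an explicit transition table, with each target state playing the role of a sub-resource. The states, URIs, and status codes here are hypothetical, chosen only to mirror the order example:

```python
# Allowed business-state transitions for a toy order resource.
# PUT /orders/{id}/{target_state} consults this table.
TRANSITIONS = {
    "received":  {"shipped", "canceled"},
    "shipped":   set(),      # terminal in this toy model
    "canceled":  set(),
}

def put_state(order, target_state):
    """Handle a PUT to a state sub-resource; return (status, order)."""
    if target_state not in TRANSITIONS:
        return 404, order                    # no such sub-resource
    if target_state not in TRANSITIONS[order["state"]]:
        return 409, order                    # Conflict: transition not allowed
    return 200, dict(order, state=target_state)

order = {"id": 1, "state": "received"}
status, order = put_state(order, "shipped")
```

One consequence the author points at becomes visible here: because the transition table is explicit, illegal transitions (canceling an already-shipped order) are rejected uniformly rather than hidden inside per-operation POST handlers.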
Is there a RESTful convention for creating or changing association attributes that relate resources to each other? I'm specifically wondering about the case where the relationship between the two resources is modeled in a plain SQL join table (in Rails called a has-and-belongs-to-many association).

For example, I have a series of Activities, each of which can cover many Subjects, and Subjects can have many Activities. I can refer to a scoped collection using nested resources like this:

GET /subjects/5 => biology

Then:

GET /subjects/5/activities

will return all the activities that cover the subject biology.

Now I'd like to add the subject biology to activity 18, but I don't need to update any other part of the activity 18 resource. It might be clear to do this:

POST /subjects/5/activities/18/belongs_to

A more natural-language URL form might look like this:

POST /activities/18/belongs_to/subjects/5

Of course, since Activities can have many Subjects and Subjects can have many Activities, then using an ORM like ActiveRecord there is either a join table to connect the two models or a richer has_many :through table. A plain join table has no resource manifestation other than the relationship between the two resources -- the records in the join table have no ids that refer to them specifically.

I'm wondering if there are better ways to model this?
On 11/3/07, Stephen Bannasch <stephen.bannasch@...> wrote: > POST /subjects/5/activities/18/belongs_to Why not just /subject/5/activities/18 ?
Hi amaeze77, ... got a real name we can use?

On 04.11.2007, at 16:56, amaeze77 wrote:

> Inherent to message-based architectures is the inclusion of
> operations in the message. I use message-based architecture here,
> because I see it as being different from so-called RPC.
>
> REST-based architecture somewhat supports message-based architectures
> via the use of "overloaded POST". I can instruct a URI on what
> operation to perform based on the contents of the POST body.

If you do that you are *not* implementing a RESTful system.

> However, the beauty of REST is exposed when semantic (business) state
> transitions are exposed as (sub)resources at well-defined URIs.
> The 'PUT' to the (sub)resource simply becomes a
> straightforward "update" to the resource itself - no hidden meaning
> is in it.

It looks like you are digging around in the right corners, but you are not quite there.

What is important for the business transaction going on between the two parties is proper alignment of their respective states with regard to the overall business process (e.g. order acceptance). The order in your example should IMHO not keep the state of the ordering process but be understood as a business document that indicates a certain significant state change of one party and communicates it to the other. An order, for example, tells the seller that the buyer has changed from 'maybe-wanting-to-order' to 'having ordered'. The technical HTTP response of the seller process then tells the buyer whether the seller has received the order (no acceptance yet!). At this point the states of buyer and seller are aligned (they both know what state the other is in with regard to the process).

The next state alignment to be made would be for the seller to tell the client it accepts (or rejects) the order, and a different business document would be used for that message. Just as in traditional business based on postal mail.
If the buyer wants to cancel the order, it sends an order-cancel document to wherever the seller told the client to send it.

Now the 'hypermedia as the engine of application state' constraint enters the scene: in order for the two parties to conduct business, they have to have a shared understanding of the business documents (e.g. both must know and agree what an order looks like) and of the possible state alignments (e.g. order cancellation). When a particular state alignment can be initiated, and where, is communicated by one party to the other using hypermedia. For example, the seller would include in the order-acceptance message some URI for the client to send cancellations to. The client, knowing the meaning of a cancellation, would understand this 'form' (-> google for Mark's RDFForms) and keep the URI in case a cancellation must be made.

The beauty is that the coupling is minimized (you cannot do these state alignments with less shared knowledge) and therefore the freedom for both parties to change is maximized. All other software architectural styles that enable these kinds of coordination require more coupling.

You might want to take a look at UBL [1] for the business docs necessary.

HTH,

Jan

[1] http://docs.oasis-open.org/ubl/os-UBL-2.0/UBL-2.0.html

> For example, I could POST a "ship" instruction to \orders\[order_id]
> as the following:
>
> <state>ship</state>
> <qty>5</qty>
>
> However, if I want to "cancel" that same order using the same
> resource (URI), the body of my POST would change to:
>
> <state>cancel</state>
> <qty>0</qty>
>
> In my opinion, a much cleaner design is to create distinct (sub)
> resources for each state transition i.e. \orders\[order_id]\shipped
> and \orders\[order_id]\canceled then PUT indicating the need for the
> creation of the new state while updating the underlying resource.
> The PUT would return (at least) a link to the next logical state - it
> could return links to all possible "next" states.
> A GET on these
> (sub)resources would return true or false to indicate whether the
> parent resource is at that current state (business).
>
> An interesting result of such design is that it forces a designer to
> look at a system (and the entities in system) in the context of a
> state machine. State machines are powerful!
>
> I've been looking at REST for about a year and some these ideas are
> just beginning to crystallize.
>
> Any thoughts on this?
At 1:25 PM -0600 11/4/07, Karen wrote:
> On 11/3/07, Stephen Bannasch <stephen.bannasch@...> wrote:
> > POST /subjects/5/activities/18/belongs_to
>
> Why not just /subject/5/activities/18 ?

OK, assuming that /activities/18 and /subjects/5 already exist, then I would assume:

PUT /activities/18
  updates the existing activity 18 resource

PUT /subjects/5
  updates the existing subject 5 resource

GET /subjects/5/activities
  gets a collection of activities that are associated with subject 5

POST /subjects/5/activities
  creates a new activity and associates it with subject 5

PUT /subjects/5/activities/18
  updates the existing activity 18 resource AND associates it with subject 5

How might I then reverse that operation when I realize I was wrong and that activity 18 is actually not part of subject 5? I don't think I'd want to do this:

DELETE /subjects/5/activities/18

What I was looking for was a way of specifying the association without having to update the activity 18 resource itself. Practically, I'd like to be able to do this in a RESTful way without having to first load the activity 18 resource just to save all the same data back.

If I treat the association itself as a resource I could do something like this:

POST /subject_activity_association

and in the body specify subject 5 and activity 18 (with ids or URLs). I don't like this much unless I am also storing other information in this association resource -- perhaps a strength factor for the association, or a user_id to keep track of who created the association.

When making an association between two resources where the relationship is many-to-many, a common implementation might be to just use a join table with each row holding just two values: the ids of the two related resources. At least with this implementation the association doesn't really have much of an independent existence.

Brainstorming ... perhaps if we separate association declarations from the join-table implementation and treat them as a separate resource ... ?
In this case, if I started from scratch and created 3 activities and 3 subjects and didn't create any associations between these two types of resources, I might also need to create association-declaration resources with the following values:

activity  subject  associated
------------------------------
   1         1       false
   2         1       false
   3         1       false
   2         2       false
   3         2       false
   3         3       false

In this implementation there is always a resource that relates any activity to any subject, and its value is true or false. In this model there would be no meaning to a POST or DELETE on one of these resources; they would be created and destroyed as a dependent action on the creation and destruction of the resources they relate. This implies that RESTfully the only operations would be GET and PUT.
On Nov 3, 2007, at 4:57 PM, Dmitriy Kopylenko wrote:
> Hello REST crowd. I've just discovered this group few days ago and
> immediately found it interesting.
>
> I'm new to REStful architectural style approach of building loosely
> coupled ROA services, but as I learn it, I find it very compelling...
>
> Enough prose... I face a real world design issue, and I'd like to
> seek an advise from a group of subject matter experts.
>
> At our organization (a state university) we have built a central
> "grading and class rosters" web based system for exposing class
> rosters to professors of different courses and allowing them to
> "grade" their classes online during so called grading periods. The
> data that is persisted into the relational DB, eventually gets into
> the mainframe "student record" system using overnight batch
> processes, etc. - pretty standard architecture. As part of this
> online grading system, we have built a capability for professors to
> download their respective class rosters (as Excel spreadsheets),
> do their thing (grading) offline, and later on "upload" their
> final grades into the system (first saving the Excel file as CSV and
> accepting it as a multipart file upload). The rules of such an
> upload are that:
>
> 1) It's supposed to be done during the open grading period
> 2) Any students who already have a grade will be ignored with any
> new grade values
> 3) Any invalid grade values for any student (for the course) will
> be ignored
> 4) Students with "empty" grades will be ignored and would be
> "eligible" to receive a valid grade in a subsequent upload
> 5) A course upload contains multiple sections of a course (class
> rosters) e.g. course 123 contains sections 12, 23, 44 (with
> multiple students in each)
> 6) Each grading period is based on the "semester" (year and term
> values e.g. 2007 01 - for Winter term of 2007, etc.)
>
> Now, there is a need to expose that kind of functionality to "3rd
> party" course management systems which have their own grade book
> type of applications, which would allow them to programmatically post
> final grades into our "central" grading system. Our immediate
> thoughts were to expose a RESTful resource for submitting grades
> using the same CSV representation that is currently being used on
> the "human web" system. That would actually work pretty well, as we
> will be able to reuse close to a 100% of our existing infrastructure.
Yes, I see no problem with that -- just use POST on the collection
URI that represents the class for which grades are being posted.
A CSV format is good enough to do the job.
> Now actually comes the design question in terms of RESTful design.
> The current business rules for the "human web" grades upload allow
> for "partial submission of the course's grades (good ones will go
> and bad ones (invalid, duplicates will be just skipped and reported
> as such to the user). No atomic "all or nothing". So we are facing
> two design choices for RESTful resources:
>
> 1) Coarse-grained "grade-upload" resource which will take a
> representation (CSV) of all the sections (students) within a given
> course for a given semester:
>
> PUT /grade-upload/{year}/{term}/{course}
>
> ...
>
> "section","student-id","grade"
>
> In this case how do we communicate the partial submission (in
> terms of proper HTTP codes) and also how do we convey the
> troubled records back to the client?
Use POST instead of PUT. The response format would depend on what the
sender is able to accept (HTML for browsers, CSV for excel, etc.).
> 2) Fine-grained "grade-posting" resource for each student for any
> given course, section and semester:
>
> PUT /grade-posting/{year}/{term}/{course}/{section}/{student-id}
>
> ...
>
> "grade"
>
> In the second case, we should be able to clearly communicate any
> error conditions with standard HTTP codes and the client software
> should be able to build any kind of "mashups" on top of it.
I would make that an additional interface.
> One "blank stare" I get from my management when I actually proposed
> the second solution was that the actual functionality requested
> from the 3rd party course management systems was the "batch grade
> upload" and the fine-grained resource for each student would be
> "wasteful" in terms of multiple HTTP requests, etc.
>
> So, what would be the optimal and "true RESTful" solution in this
> situation?
Both. Use POST for the batch grade upload and PUT to provide a
more fine-grained resource interface. The POST interface can be
described by a form when the user browses to the course section
using the GET interface.
You could also use PATCH instead of POST, assuming you define a
media type for overlapping CSV records, but that would only be
advantageous if the differencing mechanism (valid/invalid/skipped
grades) was somehow generic enough to apply to other resources.
....Roy
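Roy's suggested batch POST with partial acceptance can be sketched as follows. This is a hypothetical handler, not the university's actual code: it parses the CSV upload, applies rows that pass the rules quoted above (no overwriting existing grades, no invalid or empty grades), and collects the skipped rows to report back to the client instead of failing atomically.

```python
import csv
import io

# Illustrative grade vocabulary; the real system's valid values may differ.
VALID_GRADES = {"A", "B", "C", "D", "F"}

def post_grades(csv_text, existing_grades):
    """Apply a CSV grade upload; return (applied, skipped) row lists."""
    applied, skipped = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["section"], row["student-id"])
        if key in existing_grades:                  # rule 2: never overwrite
            skipped.append((key, "already graded"))
        elif row["grade"] not in VALID_GRADES:      # rules 3 and 4
            skipped.append((key, "invalid or empty grade"))
        else:
            existing_grades[key] = row["grade"]
            applied.append(key)
    return applied, skipped

upload = 'section,student-id,grade\n12,1001,A\n12,1002,Z\n23,1003,B\n'
grades = {("23", "1003"): "C"}                      # 1003 already graded
applied, skipped = post_grades(upload, grades)
```

The (applied, skipped) pair is what the response body would carry; the overall response can still be a 200, since partial acceptance is the documented contract of the batch interface rather than an error.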
On 11/4/07, Stephen Bannasch <stephen.bannasch@...> wrote:
> PUT /subjects/5/activities/18
>
> updates the existing activity 18 resource AND associates it with subject 5.

No, it should just be the association. That's the key: the relationship *is* a resource all to itself. Same with a transaction (the example in the REST book).

> How might I then reverse that operation when I realize I was wrong and
> that activity 18 is actually not part of subject 5? I don't think I'd
> want to do this:
>
> DELETE /subjects/5/activities/18

That's why the resource is just the relationship.

> What I was looking for was a way of specifying the association without
> having to update the activity 18 resource itself.

That may not be the URL you want to use, of course. But yes, just specify the association.

> I don't like this much unless I am also storing other information in
> this association resource -- perhaps a strength-factor for the
> association or a user_id to keep track of who created the association.

Why? I mean, you can if the app seems to need it, of course. But it's okay to have a resource that's very simple.

> When making an association between two resources where the relationship
> is many-to-many a common implementation might be to just use a join
> table with each row holding just two values: the ids of the two related
> resources. At least with this implementation the association doesn't
> really have much of an independent existence.

The association doesn't need to map directly to a database resource. That's one of the things that's hard to let go of, or at least it was for me. There's nothing wrong with exposing, for instance, a single flag out of a database record as its own separate resource, if it makes sense to need it that way.

I ended up doing this to maintain a newsrc-type list of read messages. The newsrc list might, for instance, look like this: "1-50, 52, 54-55, 57". Send a DELETE to http://blahblahblah/55 and internally the newsrc line gets changed to "1-50, 52, 54, 57".
The "resource" isn't even an entire *field* in the database in this case.

> This implies that Restfully the only operations would be GET and PUT.

Perhaps. Or you could use existence or not as your true/false value - effectively the newsrc example does that, where if the number exists in the range, then "read=true." But yeah, not all resources, especially very simple ones, justify all HTTP methods.
In rest-discuss, Stephen Bannasch wrote:

> OK, assuming that /activities/18 and /subjects/5 already exist
> then I would assume:
[...]
> PUT /subjects/5/activities/18
>
> updates the existing activity 18 resource AND associates it with
> subject 5.

I presume that you would make it so if you wish that it be so.

> How might I then reverse that operation when I realize I was wrong
> and that activity 18 is actually not part of subject 5? I don't
> think I'd want to do this:
>
> DELETE /subjects/5/activities/18

The "DELETE" request seems entirely appropriate to me. Why do you think that you wouldn't want to send that request?

> What I was looking for was a way of specifying the association
> without having to update the activity 18 resource itself.

If you want to specify an association between "/subjects/5" and "/activities/18" without making a request upon "/activities/18", you have good options. If you want to specify an association between "/subjects/5" and "/activities/18" without affecting "/activities/18", I'd want to know why you harbor such a goal.

> If I treat the association itself as a resource I could do something
> like this:
>
> POST /subject_activity_association
>
> and in the body specify subject 5 and activity 18 (with ids or urls).

You could do something like this:

PUT /subject_activity_association/5,18
Host: www.example
Content-Type: text/uri-list; charset=UTF-8
Content-Length: 65

http://www.example/subjects/5
http://www.example/activities/18

> When making an association between two resources where the
> relationship is many-to-many a common implementation might be to
> just use a join table with each row holding just two values: the
> ids of the two related resources. At least with this implementation
> the association doesn't really have much of an independent
> existence.
>
> Brainstorming ...
>
> Perhaps if [we] separate association declarations from the join
> table implementation and treat them as a separate resource ... ?
Why should you let the implementation exert such influence on the public interface? Expose the URIs that you want and concoct some reasonably speedy implementation to prop them up.

> In this case if I started from scratch and created 3 activities and
> 3 subjects and didn't create any associations between these two
> types of resources I might also need to create association
> declaration resources with the following values:
>
> activity subject associated
> ------------------------------
> 1 1 false
> 2 1 false
> 3 1 false
> 2 2 false
> 3 2 false
> 3 3 false

I would automate the works so that, by default, activities and subjects do not associate.

> In this implementation there is always a resource that relates any
> activity to any subject and it's value is true or false.

I would convey the semantic in the Status-Code in HTTP. If the relationship exists, a "GET" request upon the corresponding URI prompts a Status-Code of "200". If the relationship does not exist, a "GET" request upon the corresponding URI prompts a Status-Code of "404". Using the Status-Code to convey the fundamental semantic allows the entity-body to bear hypertext, such as links to the members of the association.

> This implies that Restfully the only operations would be GET and
> PUT.

REST doesn't enumerate the set of uniform operations, nor does REST specify how to name any of the uniform operations. HTTP/1.1 isn't the final RESTful protocol, and won't be the best when Waka takes over.

--
Etan Wexler.
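Etan's existence-as-state suggestion can be sketched in a few lines: the association between an activity and a subject is its own resource, existence *is* the state, and GET conveys it through the status code. The URIs and ids are hypothetical, echoing the /subject_activity_association/5,18 example:

```python
# Associations stored as a set of (subject_id, activity_id) pairs.
# PUT creates the resource, DELETE removes it, GET reports existence.

associations = set()

def put_association(subject_id, activity_id):
    """PUT /subject_activity_association/{subject},{activity}"""
    associations.add((subject_id, activity_id))
    return 204

def delete_association(subject_id, activity_id):
    """DELETE the association resource; idempotent."""
    associations.discard((subject_id, activity_id))
    return 204

def get_association(subject_id, activity_id):
    """GET: 200 if the pair is associated, 404 if it is not."""
    return 200 if (subject_id, activity_id) in associations else 404

put_association(5, 18)
```

Note there is no stored true/false column at all; the default ("not associated") is exactly the absence of the resource, which is what "by default, activities and subjects do not associate" amounts to.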
On 04.11.2007, at 23:04, amaeze77 wrote:

> --- In rest-discuss@yahoogroups.com, Jan Algermissen
> <algermissen1971@...> wrote:
>>
>> Hi amaeze77, ....got a real name we can use?
>
> Amaeze works :)
>
>> On 04.11.2007, at 16:56, amaeze77 wrote:
>>
>>> Inherent to message-based architectures is the inclusion of
>>> operations in the message. I use message-based architecture here,
>>> because I see it as being different from so-called RPC.
>>>
>>> REST-based architecture somewhat supports message-based
>>> architectures via the use of "overloaded POST". I can instruct a
>>> URI on what operation to perform based on the contents of the
>>> POST body.
>>
>> If you do that you are *not* implementing a RESTful system.
>
> Why? Because in your opinion I am using overloaded POST in the wrong
> way? Overloaded POST is used for batches (for example), is that no
> longer RESTful?

Tunneling operations through POST breaks REST's uniform interface
constraint. You end up with client and server being coupled on the
operation semantics.

> In my example, I am modeling the state transitions of an order after
> it has been "received" so in essence I don't have two parties
> communicating. The system is moving the order along internally.

If you distribute your order processing, you effectively do have two
parties, and you should treat the coordination between them in the
same way. If that seems overly complex, then maybe REST isn't the
right choice for the back-end architecture?

> Hence I'm not sure that the rest of your post is really applicable
> to me even though I think I get what you're getting at. It seems
> like our contexts may be different.
>
> I am curious, what "verb" are you using to communicate these state
> changes?

You'd use POST, because the invocations are not idempotent.

Jan

>> At this point the states of buyer and seller are aligned (they both
>> know what state the other is in with regard to the process).
>> The next state alignment to be made would be for the seller to
>> tell the client it accepts (or rejects) the order, and a different
>> business document would be used for that message. Just as in
>> traditional business dealings based on postal mail.
>>
>> If the buyer wants to cancel the order, it sends an order
>> cancellation document to wherever the seller told the client to
>> send it to.
>>
>> Now the 'hypermedia as the engine of application state' constraint
>> enters the scene: in order for the two parties to conduct business,
>> they have to have shared understanding of the business documents
>> (e.g. both must know and agree what an order looks like) and of the
>> possible state alignments (e.g. order cancellation). When a
>> particular state alignment can be initiated, and where, is
>> communicated by one party to the other using hypermedia. For
>> example, the seller would include in the order acceptance message
>> some URI for the client to send cancellations to. The client,
>> knowing about the meaning of a cancellation, would understand this
>> 'form' (-> google for Mark's RDFForms) and keep the URI in case a
>> cancellation must be made.
>>
>> The beauty is that the coupling is minimized (you cannot do these
>> state alignments with less shared knowledge) and therefore the
>> freedom for both parties to change is maximized. All other software
>> architectural styles that enable these kinds of coordinations
>> require more coupling.
>>
>> You might want to take a look at UBL[1] for the business docs
>> necessary.
>> HTH,
>> Jan
>>
>> [1] http://docs.oasis-open.org/ubl/os-UBL-2.0/UBL-2.0.html
>>
>>> For example, I could POST a "ship" instruction to
>>> \orders\[order_id] as the following:
>>>
>>> <state>ship</state>
>>> <qty>5</qty>
>>>
>>> However, if I want to "cancel" that same order using the same
>>> resource (URI), the body of my POST would change to:
>>>
>>> <state>cancel</state>
>>> <qty>0</qty>
>>>
>>> In my opinion, a much cleaner design is to create distinct (sub)
>>> resources for each state transition, i.e. \orders\[order_id]\shipped
>>> and \orders\[order_id]\canceled, then PUT indicating the need for
>>> the creation of the new state while updating the underlying
>>> resource. The PUT would return (at least) a link to the next
>>> logical state - it could return links to all possible "next"
>>> states. A GET on these (sub)resources would return true or false
>>> to indicate whether the parent resource is at that current state
>>> (business).
>>>
>>> An interesting result of such design is that it forces a designer
>>> to look at a system (and the entities in the system) in the
>>> context of a state machine. State machines are powerful!
>>>
>>> I've been looking at REST for about a year and some of these ideas
>>> are just beginning to crystallize.
>>>
>>> Any thoughts on this?
Thanks Karen and Etan for responding; I've replied to both of your
comments in this post:

At 9:53 PM -0600 11/4/07, Karen wrote:
>On 11/4/07, Stephen Bannasch <stephen.bannasch@...> wrote:
>> PUT /subjects/5/activities/18
>>
>> updates the existing activity 18 resource AND associates it with
>> subject 5.
>
>No, it should just be the association. That's the key: the
>relationship *is* a resource all to itself. Same with a transaction

Well, right now (the way I've implemented it) that URL will update
activity 18 AND create an association between activity 18 and
subject 5.

The following:

  PUT /activities/18

would update activity 18 without updating any associations between
activity 18 and any other resource.

And this:

  PUT /gradelevels/7/activities/18

will update activity 18 AND create an association between activity 18
and gradelevel 7 if it doesn't already exist.

This:

  POST /subjects/5/activities

will create a new activity AND associate it with subject 5.

But I agree with you that it makes sense that the association should
be a resource all to itself. It isn't right now, and I'm wrestling
with the best way to expose it.

>(the example in the REST book).

What REST book?

At 4:02 AM +0000 11/5/07, Etan Wexler wrote:
>If you want to specify an association between "/subjects/5" and
>"/activities/18" without making a request upon "/activities/18", you
>have good options. If you want to specify an association between
>"/subjects/5" and "/activities/18" without affecting "/activities/18",
>I'd want to know why you harbor such a goal.

The association is a many-to-many association, and I implement it as
a separate object with a link to both of the associated objects. In
the domain I am modeling, activity 18 doesn't change when
associations are made between it and subjects 5 and 6, and the fact
that I have separate objects that link activity 18 with subjects 5
and 6 is an implementation detail.
What is important is that the system can (besides just being able to
restfully interact with activity and subject resources themselves)
support these interactions:

1) respond with all the subjects associated with an activity
2) respond with all the activities associated with a subject
3) assert that an association exists between an activity and a subject
4) assert that an association does not exist between an activity and
   a subject

Requests 1 and 2 return a response that is not the associations
themselves but the collection of resources the associations specify.
Looking at this from the object relationships I'm modeling, requests
3 and 4 can be thought of as just setting the boolean state of an
existing association.

Another aspect of this model is that it seems unreasonable for a
client to have to know whether an association exists before asserting
that the association is true or false.

At 9:53 PM -0600 11/4/07, Karen wrote:
>The association doesn't need to map directly to a database resource.
>That's one of the things that's hard to let go of, or at least was
>for me. There's nothing wrong with exposing, for instance, a single
>flag out of a database record as its own separate resource, if it
>makes sense to need it that way.
>
>I ended up doing this to maintain a newsrc-type list of read
>messages. The newsrc list might, for instance, look like this:
>"1-50, 52, 54-55, 57". Send a DELETE to http://blahblahblah/55 and
>internally the newsrc line gets changed to "1-50, 52, 54, 57". The
>"resource" isn't even an entire *field* in the database in this case.

That's a good point to keep in mind -- I've definitely started with
the idea that a resource <=> object <=> database record.
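[Interactions 1 and 2 fall out naturally once the associations are held as a set of (subject, activity) pairs. A minimal Python sketch, with made-up data and function names of my own choosing:

```python
# Hypothetical association set: (subject_id, activity_id) pairs.
associations = {(5, 18), (6, 18), (5, 2)}

def subjects_for_activity(activity_id):
    """Backs a GET on /activities/{id}/subjects (interaction 1)."""
    return sorted(s for (s, a) in associations if a == activity_id)

def activities_for_subject(subject_id):
    """Backs a GET on /subjects/{id}/activities (interaction 2)."""
    return sorted(a for (s, a) in associations if s == subject_id)

print(subjects_for_activity(18))  # [5, 6]
print(activities_for_subject(5))  # [2, 18]
```

As Stephen says, the responses would be the referenced resources (or links to them), not the association rows themselves.]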
Now I'm thinking this might be clear:

Assert this association is true:

  PUT /subjects/5/activities/18/association

Assert this association is false:

  DELETE /subjects/5/activities/18/association

Determine the state of this association:

  GET /subjects/5/activities/18/association

If the association is true (the resource exists), return:
HTTP 204 No Content.

If the association is false (the resource does not exist), return:
HTTP 404 Not Found.

If either associated resource doesn't exist (the conditions necessary
for the resource to exist at all are not true), return:
HTTP 400 Bad Request.

Extending the way I've already implemented part of this would also
mean this alternate URL points to the same resource:

  /activities/18/subjects/5/association
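[A server-side sketch of these rules in Python, with in-memory sets standing in for real storage; all names here are mine, for illustration only:

```python
subjects = {5}
activities = {18}
associations = set()  # (subject_id, activity_id) pairs

def association_resource(method, subject_id, activity_id):
    """Status code for /subjects/{s}/activities/{a}/association."""
    if subject_id not in subjects or activity_id not in activities:
        return 400  # the conditions for the resource to exist fail
    pair = (subject_id, activity_id)
    if method == "PUT":          # assert the association is true
        associations.add(pair)   # idempotent: repeats change nothing
        return 204
    if method == "DELETE":       # assert the association is false
        associations.discard(pair)
        return 204
    if method == "GET":          # 204 if it exists, 404 if not
        return 204 if pair in associations else 404
    return 405
```

Note that PUT and DELETE are deliberately idempotent here, so a client need not know whether the association exists before asserting it either way.]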
On 11/5/07, Dmitriy Kopylenko <dmitriy.kopylenko@...> wrote:
>
> Thanks Roy for your advice. So, what would be the correct HTTP response code in case of "partial grade upload" using POST interface? For example, let's say we have
>
> POST /grade-upload/{year}/{term}/{course}
>
> ...
>
> sec1, "John Doe", A
> sec2, "Jane Doe", B
> sec21, "Some Guy", XX
>
> and server accepts two valid grades for Jane and John, but "skips" an invalid grade for "Some Guy". So the response entity body could send back a representation of "invalid" records along with error messages, etc.,
You could return a representation of the state of the grade list after
processing the request. You would indicate that it was a
representation of the list by including a Content-Location header.
> but in terms of the proper HTTP code, I'm not sure. Does 200 seem like a good idea in the situation like this (partial acceptance of the resource's state/representation)? I couldn't find any suitable HTTP codes for this situation.
Funnily enough, because of some problems with the use of
Content-Location, a new response code has been proposed which is to
mean the same thing as when the Content-Location header has the value
of the Request-URI.
Mark.
--
Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca
Coactus; Web-inspired integration strategies http://www.coactus.com
On 11/5/07, Stephen Bannasch <stephen.bannasch@...> wrote:
> Well right now (the way I've implemented it) that URL will update
> activity 18 AND create an association between activity 18 and
> subject 5.

Which is okay, as long as you don't mind an update that doesn't
update anything. If you want to be able to just create the
association, you want to expose the association by itself. Especially
if you want to simply delete the association... otherwise you have to
do something like DELETE /gradelevels/7/activities/18 and have it not
actually delete activity 18 (for that you'd have to DELETE
/activities/18 explicitly), and that's kind of non-intuitive. (The
former *would* legitimately delete the representation, but...)

> What REST book?

Why, the book you're supposed to read after Dr. Fielding's
dissertation, of course. RESTful Web Services:
http://www.oreilly.com/catalog/9780596529260/

> Now I'm thinking this might be clear:

Seems reasonable to me. I'd leave off the "association" part if it
were me, but that's purely aesthetic.
Is it just me, or do all these discussions about URI form and pattern seem to put undue emphasis on URI composition at the expense of hypermedia-driven application state? YS
On 11/5/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
> Is it just me or do all these discussions about URI form and pattern
> seem to put undue emphasis on URI composition at the expense of
> hypermedia-driven application state?

Why are they mutually exclusive?

So long as the machine side of things assumes the URI is opaque, I
don't think there's a problem with using a non-opaque string as a
shorthand for purposes of the discussion here. That's why I mentioned
"aesthetics" - it ought only to be us humans who care what the thing
looks like.

(Of course, I may be a heretic for preferring non-opacity for the
human side of things. If a human wants to construct an URL, more
power to him; he's got a pretty sophisticated error-recovery system.
If the same human wants to code his client app to construct the URL,
it's his own darn fault things break when the schema changes -
machines shouldn't mind the tedium of following a link trail every
time, and aren't so good at figuring out what happened when things
didn't work. Let all play to their strengths.)
Karen <karen.cravens@...> writes:

> On 11/5/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
>> Is it just me or do all these discussions about URI form and pattern
>> seem to put undue emphasis on URI composition at the expense of
>> hypermedia-driven application state?
>
> Why are they mutually exclusive?

I don't think 'undue emphasis' means exclusivity, and I am certainly
not saying or suggesting that they are mutually exclusive.

I didn't mean to take a holier-than-thou attitude. I certainly have
obsessed over the URI's look in the past, and I am still doing it at
times.

Rather, I want to know if more emphasis should be put on the
hypermedia aspect.

YS
I recently presented on REST vs. SOA at a BeJUG (Belgian Java User Group) event - I'd appreciate your comments/corrections: http://www.innoq.com/blog/st/2007/11/04/rest_talk_at_bejug_video.html Slides are available online, too: http://www.innoq.com/blog/st/2007/10/09/rest_vs_soa_presentation.html Stefan
I watched it last night and I think it'll be the standard link I send
folks who want to know what the whole REST thing is about (especially
administrators with some notion of web services and SOA but not
exactly sure how REST fits into all of that).

Key points:

- REST has "won" here in the echo chamber, but not necessarily out
  there in the world.

- REST is simple, not easy.

- Doing REST correctly means changing the way you do things -- WS-*
  often just means using different tools to do the same things you've
  been doing. REST, on the other hand, requires fundamentally
  rethinking basic assumptions & design principles.

(I hope that's a relatively fair paraphrasing...!)

I will note that I have occasionally over-sold the "REST is easy"
idea and generally regretted it later. All that said, I think the
presentation does a nice job of describing REST's real benefits.

-peter keane
daseproject.org

On Mon, 5 Nov 2007, Stefan Tilkov wrote:

> I recently presented on REST vs. SOA at a BeJUG (Belgian Java User
> Group) event - I'd appreciate your comments/corrections:
>
> http://www.innoq.com/blog/st/2007/11/04/rest_talk_at_bejug_video.html
>
> Slides are available online, too:
> http://www.innoq.com/blog/st/2007/10/09/rest_vs_soa_presentation.html
>
> Stefan
On 11/5/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
> I don't think 'undue emphasise' means exclusivity, and I am certainly
> not saying and suggesting that they are mutually exclusive.
"At the expense of" seemed to imply that, I guess. But I don't see it
as being a tradeoff at all. Unless you assume we have limited time to
spend posting to the list, I suppose.
But really, most of the time when it looks like we're discussing the
trivialities of URI definition, what we're really looking at is
defining the resources that are being exposed. And that's a big deal,
I think, and there is more depth there than the simplicity of
hypermedia's "put in links!" The latter's not unimportant, it's just
that there's seldom anything to consider there.
> Rather, I want to know if more emphasis should be put on the
> hypermedia aspect.
I won't argue with that. And it's something I struggle with in
Wirebird: you can request a representation of any given resource...
no, wait. TOO DARN MANY RE* WORDS IN REST!
Ahem.
You can request a page in any flavor ('scuse me while I sidestep
jargon), either by specifying HTTP_ACCEPT or, if there are multiple
valid choices there, by the extension, with text/html being the
default (because Wirebird is both a web site and a service, intended
for use with dumb semi-compliant browsers as well as
as-yet-hypothetical smart/automated clients).
The HTML version, logically enough, is intended for human consumption.
In some ways, it's more hypermedia-complete than other versions,
because it has forms defining precisely how it expects its POSTs and
PUTs and DELETEs to happen (okay, so they're all overloaded POSTs, but
you work with what you have, y'know?). In other ways, it has the
potential to be less complete, since it's template-driven, and
sometimes you don't want to overwhelm the user with options - but
unless a template is actually broken, you can always "get there from
here."
The Perl and JSON versions, and the generic-XML version when I get off
my lazy butt and implement it (yeah, you can tell where I fall on the
JSON vs. XML wars: the lazy side), all just do a big fat dump of every
value they've come up with for that particular representation... so
they should be hypermedia-complete (when everything's finished,
anyway), sometimes redundantly ("there's more than one way to get
there from here.") In practice, they're not as complete as the HTML
version because I can alter things in the template: "a href='<tmpl_var
some_link>/addendum.html'" and such, but when I'm not being lazy those
are supposed to find their way back into the raw values.
It's the RSS and Atom versions that bother me. And maybe they
shouldn't, since RSS (especially) isn't exactly a standard designed
for such general use. (Atompub, on the other hand, is high on my list
of "things I really need to investigate and, no doubt, implement")
They don't really stand alone, though there's an RSS and Atom version
of every page in the magic <link> headers. But, just as a
for-instance, the RSS feed for a single "page" (in the Wirebird sense,
meaning a single mailing-list/forum/newsgroup post, wiki page, blog
entry, or blog comment) has no external links at all, since it's a
leaf node:
http://forum.wirebird.com/page/gamehawk/main/00000050.rss
The feed is for page 50, which is also the sole entry in the feed.
Now, granted, it doesn't make a great deal of sense to even *have* a
feed at leaf-node level, but if you look at the HTML version:
http://forum.wirebird.com/page/gamehawk/main/00000050.html
you see there's twenty or thirty links to various other bits of
Wirebird, near and far. Granted I don't need to stuff the entire
category/group/topic tree into every RSS feed, but at least the
breadcrumb trail would be nice. And packing it all into HTML in
description fields seems like cheating. I know there's more RSS fields
than I've gotten around to using (lazy!) but I don't know that there's
enough to be hypermedia-complete across all the things Wirebird can
serve (categories, groups, topics, threads, pages, author profiles,
etc.).
Or do I just rely on the fact that any RESTful client ought to be
smart enough (on seeing the "Vary: Accept" and "Content-Location:
http://forum.wirebird.com/page/gamehawk/main/00000050" headers) to ask
for a different flavor, knowing that XML or JSON is more likely to get
it something hypermedia-complete?
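[One crude way to resolve the flavor, sketched in Python with the extension taking precedence over Accept and HTML as the fallback; the mapping table and precedence rule are my guesses for illustration, not a description of Wirebird's actual behavior:

```python
# Hypothetical mapping; Wirebird's real table may differ.
EXTENSIONS = {
    ".html": "text/html",
    ".json": "application/json",
    ".rss":  "application/rss+xml",
    ".atom": "application/atom+xml",
}

def pick_media_type(path, accept_header):
    """Choose a response type: extension first, then Accept, then HTML."""
    for ext, mtype in EXTENSIONS.items():
        if path.endswith(ext):
            return mtype
    for item in accept_header.split(","):
        mtype = item.split(";")[0].strip()
        if mtype in EXTENSIONS.values():
            return mtype
    return "text/html"  # default for dumb semi-compliant browsers

print(pick_media_type("/page/gamehawk/main/00000050.rss", "*/*"))
```
]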
On Nov 5, 2007, at 5:54 AM, Dmitriy Kopylenko wrote:
> Thanks Roy for your advice. So, what would be the correct HTTP
> response code in case of "partial grade upload" using POST
> interface? For example, let's say we have
>
> POST /grade-upload/{year}/{term}/{course}
>
> ...
>
> sec1, "John Doe", A
> sec2, "Jane Doe", B
> sec21, "Some Guy", XX
>
> and server accepts two valid grades for Jane and John, but "skips"
> an invalid grade for "Some Guy". So the response entity body could
> send back a representation of "invalid" records along with error
> messages, etc., but in terms of the proper HTTP code, I'm not sure.
> Does 200 seem like a good idea in the situation like this (partial
> acceptance of the resource's state/representation)? I couldn't find
> any suitable HTTP codes for this situation.
Keep in mind that when you do POST there is no longer a shared
understanding of the resource's state. As such, I would probably
respond with 200 and the same index/form page with the latest
grades updated (blank if invalid or empty), and a link to the
equivalent content in the form of text/csv. Responding with 200
and a Content-Location field equal to the requested URI should
tell the client that the representation being returned is the
resulting resource state.
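[A sketch of the server side of this in Python: partition the upload into accepted and rejected rows, then render both in the 200 response whose Content-Location equals the request URI. The grade set and function names are mine, purely illustrative:

```python
import csv
import io

VALID_GRADES = {"A", "B", "C", "D", "F"}  # assumed grading scheme

def process_grade_upload(body):
    """Split a CSV grade upload into (accepted, rejected) rows."""
    accepted, rejected = [], []
    for row in csv.reader(io.StringIO(body), skipinitialspace=True):
        if len(row) == 3 and row[2] in VALID_GRADES:
            accepted.append(row)
        else:
            rejected.append(row)
    return accepted, rejected

body = 'sec1, "John Doe", A\nsec2, "Jane Doe", B\nsec21, "Some Guy", XX\n'
accepted, rejected = process_grade_upload(body)
print(len(accepted), len(rejected))  # 2 1
```
]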
OTOH, this type of content manipulation is what JCR-based
web content management products were designed to handle almost
automatically. For example, Day Communiqué has built-in support
for spreadsheets and can handle both file-like updates (mapping
onto the content hierarchy) or direct manipulation of the
individual record fields.
....Roy
Hey, I am a noob too, so forgive any mistakes.

> In my opinion, a much cleaner design is to create distinct (sub)
> resources for each state transition i.e. \orders\[order_id]\shipped
> and \orders\[order_id]\canceled then PUT indicating the need for the
> creation of the new state while updating the underlying resource.

Are you actually creating a new state, or are you just changing the
state at the server? I believe POST to /orders/[order_id] is fine.

> The PUT would return (at least) a link to the next logical state - it
> could return links to all possible "next" states. A GET on these

This can be done with a <link> element in what is returned.
Alternatively, the next logical state that you mention can also be
returned, the way AtomPub sends the URI of the newly created blog
post when you POST a new entry to it.

> For example, I could POST a "ship" instruction to \orders\[order_id]
> as the following:
> <state>ship</state>
> <qty>5</qty>
>
> However, if I want to "cancel" that same order using the same
> resource (URI), the body of my POST would change to:
> <state>cancel</state>
> <qty>0</qty>

I think what you are raising here is the problem that you are sending
the verbs SHIP/CANCEL to the URI through the POST body. Note that
this isn't a constraint in REST; REST doesn't restrict you to using
only the HTTP verbs. That is a constraint of HTTP, in that the HTTP
authors think these are all the verbs you need for resources on the
web. If you think that your application needs more, it is perfectly
RESTful to do so... but don't use HTTP.

> In my opinion, a much cleaner design is to create distinct (sub)
> resources for each state transition i.e. \orders\[order_id]\shipped

This is where you are going wrong. A URI points to a resource/state,
not a state transition. /orders/[order_id]/shipped is not a resource,
so it is wrong to write a URI for it. /orders/[order_id]/status is a
resource; this could be used.
> An interesting result of such design is that it forces a designer
> to look at a system (and the entities in the system) in the context
> of a state machine. State machines are powerful!

Exactly. So a POST to /orders/[order_id] changes its state from
shipped to cancelled. What is wrong with this?

Again, I am a newbie, so I might be very wrong! But this is my
understanding of REST.

Regards
devdatta
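[The state-machine point can be made concrete with a short Python sketch of the order lifecycle -- a hypothetical transition table of my own, backing a POST to /orders/[order_id]:

```python
# Hypothetical order lifecycle; states and actions are illustrative.
TRANSITIONS = {
    "received": {"ship": "shipped", "cancel": "cancelled"},
    "shipped": {},    # terminal
    "cancelled": {},  # terminal
}

def post_order_action(current_state, action):
    """Handle POST /orders/[order_id] carrying an action in the body.

    Returns (status_code, new_state); 409 Conflict signals an action
    that is not legal from the current state.
    """
    next_state = TRANSITIONS.get(current_state, {}).get(action)
    if next_state is None:
        return 409, current_state
    return 200, next_state

print(post_order_action("received", "ship"))   # (200, 'shipped')
print(post_order_action("shipped", "cancel"))  # (409, 'shipped')
```
]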
* pkeane <pkeane@...> [2007-11-05 21:40]: > (I hope that's a relatively fair paraphrasing...!) I would say so; however I would add a clause to the last bullet point: > Doing REST correctly means changing the way you do things -- > WS-* often just means using different tools to do the same > things you've been doing. REST, on the other hand requires > fundamentally rethinking basic assumptions & design principles. “… in order to be able to reap the benefits that REST confers.” In practice the RESTfulness of systems is a continuum; esp. if you are implementing a RESTful system in terms of HTTP, whose uniform interface is not always quite exactly the shape you’d want. (This has nothing to do with the critique usually levelled against REST that you can’t express complex systems in terms of resources; you can. It’s just that HTTP as she is spoke would sometimes require you to decompose things as resources that you’d really want to be part of the uniform interface; cf. PATCH for an example of such a case.) And so sometimes the pragmatic thing *with HTTP* can be to violate the uniform interface around the edges. The essence of REST is that having done things in a certain way, you get a system that has various desirable properties. (By trading some others for those.) If you haven’t, you don’t. In practice this means tradeoffs; not in every aspect of an app are the benefits of a RESTful design as useful as in others. This does *not* mean you should break the constraints left and right. The default should be to respect them. The great thing about REST, however, is that it gives you a systematic way to reason about the properties of a system. It provides a formal basis for understanding the consequences of various tradeoffs, so you can make decisions in awareness of their costs. As for “REST is simple, not easy”, that is correct, but the statement always feels weird to me, even a little disingenuous. 
I would say that REST is a bit of a red herring in there; the point
is that *system design* is not easy. REST does not relieve you of
that, but then neither can SOA or any other approach. But REST
constrains systems, so in some senses it does make designing them
easier, if not easy. Because of loose coupling it also makes certain
things manifestly easier. It just isn't magic pixie dust that can do
your job for you.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Yohanes Santoso wrote:
> Is it just me or do all these discussions about URI form and pattern
> seem to put undue emphasis on URI composition at the expense of
> hypermedia-driven application state?

It's not just you. I'm a big believer in having readable, hackable,
and generally pleasant URIs, but it's mostly off-topic for a list
about REST.
Speaking of BeJUG... anyone going to JavaPolis? If so, is anyone
setting up a REST BOF and/or meetup type of thing?

-Michael

pkeane wrote:
> I watched it last night and I think it'll be the standard link I
> send folks who want to know what the whole REST thing is about
> (especially administrators with some notion of web services and SOA
> but not exactly sure how REST fits into all of that).
>
> Key points:
>
> -REST has "won" here in the echo chamber, but not necessarily out
> there in the world.
>
> -REST is simple, not easy.
>
> -Doing REST correctly means changing the way you do things -- WS-*
> often just means using different tools to do the same things you've
> been doing. REST, on the other hand requires fundamentally
> rethinking basic assumptions & design principles.
>
> (I hope that's a relatively fair paraphrasing...!)
>
> I will note that I have occasionally over-sold the "REST is easy"
> idea and generally regretted it later. All that said, I think the
> presentation does a nice job of describing REST's real benefits.
>
> -peter keane
> daseproject.org
>
> On Mon, 5 Nov 2007, Stefan Tilkov wrote:
>
> > I recently presented on REST vs. SOA at a BeJUG (Belgian Java User
> > Group) event - I'd appreciate your comments/corrections:
> >
> > http://www.innoq.com/blog/st/2007/11/04/rest_talk_at_bejug_video.html
> >
> > Slides are available online, too:
> > http://www.innoq.com/blog/st/2007/10/09/rest_vs_soa_presentation.html
> >
> > Stefan
At 1:15 PM -0500 11/5/07, Yohanes Santoso wrote:
>Karen <karen.cravens@...> writes:
>
>> On 11/5/07, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
>>> Is it just me or do all these discussions about URI form and
>>> pattern seem to put undue emphasis on URI composition at the
>>> expense of hypermedia-driven application state?
>>
>> Why are they mutually exclusive?
>
>I don't think 'undue emphasis' means exclusivity, and I am certainly
>not saying or suggesting that they are mutually exclusive.
>
>I didn't mean to take a holier-than-thou attitude. I certainly have
>obsessed over the URI's look in the past, and I am still doing it at
>times.
>
>Rather, I want to know if more emphasis should be put on the
>hypermedia aspect.

The URI form is an expression of how the resources are modeled. I am
wrestling with how to expose the ability to manipulate the
many-to-many associations between two resource types when these
associations are both:

1) not properties of the original resource types
2) implicitly created when the original resource types are created

For me, getting the URI form right relates to how I model the objects
and express this modeling as RESTful resources.

At 10:21 AM +0000 11/6/07, Jon Hanna also wrote:
>It's not just you. I'm a big believer in having readable, hackable,
>and generally pleasant URIs, but it's mostly off-topic for a list
>about REST.

In my experience, getting the domain model right and then working out
the RESTful expression usually leads to URI forms that are readable
and hackable. In this thread I'm working out a model for RESTful
access to an element of my domain that I didn't see any existing
design patterns for. I haven't been clear enough in my writing if you
are interpreting my goals as creating "readable, hackable, and
generally pleasant URIs" -- that's secondary.

Regarding "hypermedia-driven application state" ... well ...
I'm looking forward to accessing these RESTful resources from
external applications soon. I won't be surprised if, when I get
there, I find there's more I need to change in how I am modeling
many-to-many associations between resources.
Hi everyone,

If you haven't seen it yet, Mark and I gave a half-day tutorial on
the RESTful web at OOPSLA a couple of weeks ago, entitled 'The Web:
Distributed Objects Realized!' I have placed the slides and
motivational paper up through my blog.

http://www.stucharlton.com/blog/archives/000168.html

The goal was to try to find unique angles from which to look at how
the Web is both similar to and different from past styles, and
particularly how collaborative "systems of systems" benefit from
looking at architecture in terms of elements, constraints, and
emergent properties. I hoped to move beyond the rat holes and
politics of the mainstream debate to some degree, so the main
argument is built from system requirements: how do you build an
information space that can scale globally and provide incentives to
increase participation?

Feedback welcome. If anything is confusing, misleading, or plain
wrong, please let me know.

cheers
Stu
Hi Stefan,

You might consider adding Microsoft Robotics Studio (MSRS) [1] to the
list of shipping Web-derived platforms. As part of MSRS you get both
a highly concurrent programming model (called CCR) and a lightweight
application model (called DSS) that extends the Web model with
structured data manipulation, event notification, and service
composition (called partnering) that ties together loosely coupled
services in a late-bound manner.

In a chat with Jon Udell [2] we do a walk-through of some specific
examples of what DSS applications look like and how they are put
together by composing loosely coupled services. It shows not only how
to compose applications using this model but also how it deals with
extensibility in a decentralized environment.

On our community page [3] you can see a bunch of real-life
applications using MSRS, and while these are mostly focused on
robotics, the model obviously is applicable in a much broader
context.

Finally, everything (except robots!) shown in the video is available
in the Microsoft Robotics Studio download [4], which is free for
non-commercial use.

Thanks,

Henrik

[1] http://www.microsoft.com/robotics
[2] http://blog.jonudell.net/2007/07/25/henrik-frystyk-nielsen-on-the-restful-architecture-of-microsoft-robotics-studio/
[3] http://msdn2.microsoft.com/robotics/aa731519
[4] http://msdn2.microsoft.com/robotics/aa731520

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Stefan Tilkov
Sent: Monday, November 05, 2007 11:15
To: REST Discuss
Subject: [rest-discuss] REST Presentation at BeJUG

I recently presented on REST vs. SOA at a BeJUG (Belgian Java User
Group) event - I'd appreciate your comments/corrections:

http://www.innoq.com/blog/st/2007/11/04/rest_talk_at_bejug_video.html

Slides are available online, too:
http://www.innoq.com/blog/st/2007/10/09/rest_vs_soa_presentation.html

Stefan
Jon Hanna wrote to rest-discuss in "Re: [rest-discuss] Re: restful way to create an association between 2 resources" on 6 November 2007: > I'm a [big] believer in having readable, hackable, and > generally pleasant URIs, but [it's] mostly off-topic for a list about REST. I've noticed that most of the discussion on rest-discuss is not about REST, but about details of implementing HTTP. The focus on HTTP doesn't bother me, but if it bothers other subscribers, perhaps the time has come for a separate mailing list about HTTP implementation. The new mailing list could allow for tangential subject matter. (The mailing list of the revived HTTP Working Group [see <http://lists.w3.org/Archives/Public/ietf-http-wg/>] has a specific focus and would not be appropriate.) An alternative to creating a new mailing list is to admit that most of us, most of the time, are not interested in discussing the Representational State Transfer per se. How opine you all? -- Etan Wexler.
I'm happy the way it is. On Nov 6, 2007 7:55 PM, Etan Wexler <yahoo.com@...> wrote: > Jon Hanna wrote to rest-discuss in "Re: [rest-discuss] Re: restful way > to create an association between 2 resources" on 6 November 2007: > > > I'm a [big] believer in having readable, hackable, and > > generally pleasant URIs, but [it's] mostly off-topic for a list about REST. > > I've noticed that most of the discussion on rest-discuss is not about > REST, but about details of implementing HTTP. The focus on HTTP doesn't > bother me, but if it bothers other subscribers, perhaps the time has > come for a separate mailing list about HTTP implementation. The new > mailing list could allow for tangential subject matter. (The mailing > list of the revived HTTP Working Group [see > <http://lists.w3.org/Archives/Public/ietf-http-wg/>] has a specific > focus and would not be appropriate.) An alternative to creating a new > mailing list is to admit that most of us, most of the time, are not > interested in discussing the Representational State Transfer per se. > > How opine you all? > > -- > Etan Wexler. Hugh
On Nov 6, 2007 8:48 PM, Hugh Winkler <hughw@...> wrote: > I'm happy the way it is. > > > On Nov 6, 2007 7:55 PM, Etan Wexler <yahoo.com@...> wrote: > > An alternative to creating a new > > mailing list is to admit that most of us, most of the time, are not > > interested in discussing the Representational State Transfer per se. > > and... I am interested in discussing REST, but I also think it's a good idea to discuss the realization of that architectural style in the reference implementation, HTTP. Hugh
On 11/6/07, Etan Wexler <yahoo.com@...> wrote: > I've noticed that most of the discussion on rest-discuss is not about > REST, but about details of implementing HTTP. In large part, though, isn't REST *about* the details of implementing HTTP? Maybe that's just me.
Stephen Bannasch wrote to rest-discuss in "[rest-discuss] Re: restful way to create an association between 2 resources" on 6 November 2007: > I am wrestling with how to expose access to be able to manipulate > the many-to-many associations between two resources types when > these associations are both: > > 1) not properties of the original resource types > 2) implicitly created when the original resource types are created You write about types but I think that you are thinking of the resources that belong to those types. In other words: "I am wrestling with how to manipulate the many-to-many associations between two resources when these associations are not properties of the original resources and are implicitly created when the original resources are created." Am I correct? > For me[,] getting the uri form right relates to how I model the objects and express this modeling as RESTful resources. The URIs that you expose and the methods that your origin server permits on the corresponding resources are your model. Whatever you do behind the curtain to implement that model is your private business. You can let the latter rule the former, but why would you? > In this thread I'm working out a model for [RESTful] access to an element of my domain that I didn't see any existing design patterns for. In this thread you've gotten the suggestion to expose the associations as resources. Let me advance another suggestion, albeit one that I prefer less than I prefer exposing the associations as resources. Suppose that you have subject 5, <http://www.example/subject-5>; subject 20, <http://www.example/subject-20>; activity 18, <http://www.example/activity-18>; a list of activities related to subject 5, <http://www.example/subject-5/activities>; a list of activities related to subject 20, <http://www.example/subject-20/activities>; a list of subjects related to activity 18, <http://www.example/activity-18/subjects>. 
Suppose that you have a user agent which has RESTfully discovered the subjects, the activity, the lists, the relationship between subject 5 and its list of activities, the relationship between subject 20 and its list of activities, the relationship between activity 18 and its list of subjects. Suppose that the representations of the subjects and of the activity have an expiration date that allows the relationships to hold throughout the examples to come. Consider the following HTTP exchange between the user agent and the origin server listening on TCP port 80 of Internet host www.example. GET http://www.example/subject-5/activities HTTP/1.1 Host: www.example HTTP/1.1 200 OK Date: Tue, 06 Nov 2007 12:00:00 GMT Allow: HEAD, GET, PUT, POST, TRACE Content-Type: text/uri-list; charset=UTF-8 Content-Length: 0 In the foregoing HTTP exchange, the user agent discovers that subject 5 has no related activities. Consider the following HTTP exchange between the user agent and the origin server listening on TCP port 80 of Internet host www.example. GET http://www.example/subject-20/activities HTTP/1.1 Host: www.example HTTP/1.1 200 OK Date: Tue, 06 Nov 2007 12:00:01 GMT Allow: HEAD, GET, PUT, POST, TRACE Content-Type: text/uri-list; charset=UTF-8 Content-Length: 0 In the foregoing HTTP exchange, the user agent discovers that subject 20 has no related activities. Consider the following HTTP exchange between the user agent and the origin server listening on TCP port 80 of Internet host www.example. GET http://www.example/activity-18/subjects HTTP/1.1 Host: www.example HTTP/1.1 200 OK Date: Tue, 06 Nov 2007 12:00:02 GMT Allow: HEAD, GET, PUT, POST, TRACE Content-Type: text/uri-list; charset=UTF-8 Content-Length: 0 In the foregoing HTTP exchange, the user agent discovers that activity 18 has no related subjects. Consider the following HTTP exchange between the user agent and the origin server listening on TCP port 80 of Internet host www.example. 
POST http://www.example/subject-5/activities HTTP/1.1 Host: www.example Content-Type: text/uri-list; charset=UTF-8 Content-Length: 32 http://www.example/activity-18 HTTP/1.1 200 OK Date: Tue, 06 Nov 2007 12:00:03 GMT Allow: HEAD, GET, PUT, POST, TRACE Content-Location: http://www.example/subject-5/activities Content-Type: text/uri-list; charset=UTF-8 Content-Length: 32 http://www.example/activity-18 In the foregoing HTTP exchange, the user agent adds activity 18 to the list of activities that relate to subject 5. Consider the following HTTP exchange between the user agent and the origin server listening on TCP port 80 of Internet host www.example. GET http://www.example/activity-18/subjects HTTP/1.1 Host: www.example HTTP/1.1 200 OK Date: Tue, 06 Nov 2007 12:00:04 GMT Allow: HEAD, GET, PUT, POST, TRACE Content-Type: text/uri-list; charset=UTF-8 Content-Length: 30 http://www.example/subject-5 Eureka! In the foregoing HTTP exchange, the user agent discovers that activity 18 relates to subject 5. What the user agent does not know and does not need to know is that the origin server amended a list of subjects in reaction to an amendment to a list of activities. Consider the following HTTP exchange between the user agent and the origin server listening on TCP port 80 of Internet host www.example. POST http://www.example/activity-18/subjects HTTP/1.1 Host: www.example Content-Type: text/uri-list; charset=UTF-8 Content-Length: 31 http://www.example/subject-20 HTTP/1.1 200 OK Date: Tue, 06 Nov 2007 12:00:05 GMT Allow: HEAD, GET, PUT, POST, TRACE Content-Location: http://www.example/activity-18/subjects Content-Type: text/uri-list; charset=UTF-8 Content-Length: 61 http://www.example/subject-5 http://www.example/subject-20 In the foregoing HTTP exchange, the user agent adds subject 20 to the list of subjects that relate to activity 18. Can you guess what comes next? 
Consider the following HTTP exchange between the user agent and the origin server listening on TCP port 80 of Internet host www.example. GET http://www.example/subject-20/activities HTTP/1.1 Host: www.example HTTP/1.1 200 OK Date: Tue, 06 Nov 2007 12:00:06 GMT Allow: HEAD, GET, PUT, POST, TRACE Content-Type: text/uri-list; charset=UTF-8 Content-Length: 32 http://www.example/activity-18 Eureka again! In the foregoing HTTP exchange, the user agent discovers that subject 20 relates to activity 18. What the user agent does not know and does not need to know is that the origin server amended a list of activities in reaction to an amendment to a list of subjects. Before listing the exchanges, I wrote that I prefer exposing the associations as resources. One reason for my preference is the ease of denying (removing, ending, ...) an association when the association is addressable. In the scenario that I prefer, denying an association requires nothing more than a "DELETE" request. This operation is O(1); repeating this operation is clean and easy. In the scenario that I elaborated, denying an association requires a "PUT" request to submit a list which excludes the indication of the target association. The operation is O(n) and involves race conditions, causing undue headache for all parties. -- Etan Wexler.
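The lost-update hazard Etan describes in the PUT-the-whole-list style can be sketched in a few lines of Python. This is an illustration I am adding, not part of the original exchange; the resource names follow the examples above, and the functions standing in for DELETE and PUT are hypothetical:

```python
# Style 1: the association is itself a resource; DELETE removes one pair.
associations = {("subject-5", "activity-18"), ("subject-20", "activity-18")}

def delete_association(subject, activity):
    """Analogous to DELETE on an association resource: O(1), no read first."""
    associations.discard((subject, activity))

# Style 2: the association lives only inside a list resource; removing it
# means GET the list, drop the entry, and PUT the whole list back.
activity_subjects = {"activity-18": ["subject-5", "subject-20"]}

def put_subject_list(activity, new_list):
    """Analogous to PUT of a complete text/uri-list representation."""
    activity_subjects[activity] = list(new_list)

# Lost-update race: two clients read the same list, each removes a
# different entry, and the second PUT silently undoes the first.
snapshot_a = list(activity_subjects["activity-18"])  # client A reads
snapshot_b = list(activity_subjects["activity-18"])  # client B reads
snapshot_a.remove("subject-5")
snapshot_b.remove("subject-20")
put_subject_list("activity-18", snapshot_a)
put_subject_list("activity-18", snapshot_b)          # clobbers A's change
print(activity_subjects["activity-18"])  # subject-5 is back: A's removal was lost
```

The DELETE-per-association style has no such window, which is one reason to prefer exposing associations as resources (an If-Match precondition on the PUT would be the other way to close the window).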
Karen wrote: > On 11/6/07, Etan Wexler <yahoo.com@...> wrote: >> I've noticed that most of the discussion on rest-discuss is not about >> REST, but about details of implementing HTTP. > > In large part, though, isn't REST *about* the details of implementing HTTP? > > Maybe that's just me. Nope, HTTP isn't the only RESTful protocol possible and not all HTTP matters relate to REST. That said, I have no beef with the topicality of the list (I'm notoriously bad as far as meandering off-topic goes, so I'm in no position to complain anyway). There's a more general matter, though, of people perhaps being led to think that the other HTTP matters we discuss here that aren't related to REST are themselves REST. I think that was Yohanes Santoso's point and it was certainly mine when I wrote agreeing with him. Quite a few people do seem to think that URI design has much more direct importance to REST than it does, and while it's reasonable for people interested in REST to ALSO be interested in URI design, and hence to end up discussing it here, it's perhaps harmful to have such an emphasis on it that we muddy the waters further.
On 11/7/07, Jon Hanna <jon@...> wrote: > > In large part, though, isn't REST *about* the details of implementing HTTP? > Nope, HTTP isn't the only RESTful protocol possible and not all HTTP > matters relate to REST. Yes, but that's why I mitigated with "in large part." > it's reasonable for people interested in REST to ALSO be interested in > URI design, and hence to end up discussing it here, it's perhaps harmful > to have such an emphasis on it that we muddy the waters further. Probably a valid point. I shall try to remember to avoid using URI design as a shorthand for resource or representation design.
* Karen <karen.cravens@...> [2007-11-07 04:00]: > In large part, though, isn't REST *about* the details of > implementing HTTP? Err, there is a bit of confusion here. First of all, mostly we’re talking about how to *use* HTTP, not how to implement it; at least I doubt that the majority of people on this list is concerned about the best way to open sockets and poll them, parse headers, etc. Even if we move past that niggling point, though, casting the question instead as “in large part, though, isn't REST *about* the details of using HTTP?”, then this still isn’t right. REST is an architectural style – it’s *two* abstractions removed from actual applications. (One too many for most people, as Roy often remarks in saying that.) HTTP is an architecture; one level of abstraction closer. So if anything, it would have to be the other way around: Using HTTP well is largely about doing representational state transfer. But there are aspects to using HTTP well (such as ETags and friends) that don’t really have anything to do with REST at all (in any deeper sense than that more RESTful systems potentially benefit more from them). And coming back to the original question: URI design – or, more broadly stated, address design – is orthogonal to the transfer of state via representations. If you put all your emphasis on addresses, you get REST/RPC hybrids. Representational state transfer is all about representations, as the name implies, not about addresses. All that said: I personally don’t mind any of these topics on this list. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
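The "ETags and friends" machinery Aristotle mentions can be sketched as a toy origin server in which If-Match turns PUT into an optimistic-concurrency update. This is an added illustration under my own assumptions, not any particular server's API; the class and method names are hypothetical:

```python
class Resource:
    """Minimal sketch of a resource with a validator (ETag)."""
    def __init__(self, body):
        self.body = body
        self.etag = "v1"

    def get(self):
        # 200 OK plus the current validator
        return 200, self.body, self.etag

    def put(self, new_body, if_match=None):
        # 412 Precondition Failed when the client's ETag is stale
        if if_match is not None and if_match != self.etag:
            return 412, None, self.etag
        self.body = new_body
        self.etag = "v" + str(int(self.etag[1:]) + 1)  # new validator
        return 200, self.body, self.etag

res = Resource("on")
status, body, etag = res.get()                # client A reads, etag "v1"
res.put("off", if_match=etag)                 # A updates; etag becomes "v2"
status2, _, _ = res.put("on", if_match=etag)  # B replays the stale "v1"
print(status2)  # 412: B must GET again and reconcile before retrying
```

Note this is good HTTP rather than REST per se, which is exactly the distinction being drawn above.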
All - I have posted a couple of responses to this mailing list, but they don't show up in the list of messages. Who do I contact about this?
--- In rest-discuss@yahoogroups.com, "Amaeze" <amaeze@...> wrote: > > > All - > > I have posted a couple of responses to this mailing list, but they don't > show up in the list of messages. Who do I contact about this? > Wow, this one just appeared. Go figure.
--- In rest-discuss@yahoogroups.com, Devdatta Akhawe <f2005125@...> wrote: > > Hey, > I am a noob too , so forgive any mistakes . > > > In my opinion, a much cleaner design is to create distinct (sub) > > resources for each state transition i.e. \orders\[order_id]\shipped > > and \orders\[order_id]\canceled then PUT indicating the need for the > > creation of the new state while updating the underlying resource. > > Are you actually creating a new state or are you just changing the state at > the server ? > > I believe POST to /orders/[order_id] is fine. What are we referring to as "state"? I am changing the internal representation of the "order" but I am also changing its business state. > > The PUT would return (at least) a link to the next logical state - it > > could return links to all possible "next" states. A GET on these > > This can be done with a <link> element in what is returned. Alternatively , > the next logical state that you mention can also be returned, like the way > the ATOM Pub sends the URI of the newly created blog post when you POST a new > entry to it. > > > > For example, I could POST a "ship" instruction to \orders\[order_id] > > as the following: > > <state>ship</state> > > <qty>5</qty> > > > > However, if I want to "cancel" that same order using the same > > resource (URI), the body of my POST would change to: > > <state>cancel</state> > > <qty>0</qty> > > I think what you are raising here is the problem that you are sending the > verbs SHIP / CANCEL to the URI through the POST body. Note that this isn't a > constraint in REST , REST doesn't restrict you to use only the HTTP Verbs. > That is a constraint of HTTP, as in the HTTP writers think that these are all > the verbs you need for resources on the web. If you think that your > application needs more, it is perfectly RESTful to do so .. but don't use > HTTP. Not too sure how to respond to that.
> > In my opinion, a much cleaner design is to create distinct (sub) > > resources for each state transition i.e. \orders\[order_id]\shipped > This is where you are going wrong. The URI points to resources / state not > state transitions. \order\[order_id]\shipped is not a resource, so it is > wrong to write a URI for it. /order/[order_id]/status is a resource, this > could be used. But what really is a resource - isn't it anything? If I created a resource \ordershipper and POST'd a document to it, what would make that any more of a resource? > > > An interesting result of such design is that it forces a designer to > > look at a system (and the entities in system) in the context of a > > state machine. State machines are powerful! > > Exactly. So a POST to the /order/order_id/ changes its state from shipped to > cancelled. What is wrong with this ? How do I communicate that I want to "cancel" the order as opposed to "ship"? > > Again, I am a newbie so I might be very wrong! but this is my understanding of > REST. > > > Regards > devdatta >
Hmm, sorry about that. It appears that either I or my co-moderator accidentally deleted your message. As a spam fighting technique, the first posts of new members require moderation. You won't have this problem again. Mark. On 11/8/07, Amaeze <amaeze@...> wrote: > > All - > > I have posted a couple of responses to this mailing list, but they don't > show up in the list of messages. Who do I contact about this? -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
In another thread I proposed PUT'ing to a sub-resource of a resource to indicate a change in the business state of that resource, e.g. PUT \lightbulb\on would turn a light bulb on. I got some interesting responses to this. Another (RESTful?) solution could be to POST to a resource that is responsible for turning a lightbulb on. For example, POST \lightswitch sending in the light bulb and state (on/off) chosen. My challenge with this second solution is that the other verbs will not be used against this resource; in essence this resource is pretty much a "POST-only" resource. Is that ok?
I understand that REST is about design and not necessarily architecture, but how do I implement a Resource-Oriented Architecture (ROA) RESTfully? I come here for answers. Do we need a new group for that? I hope not. --- In rest-discuss@yahoogroups.com, "A. Pagaltzis" <pagaltzis@...> wrote: > > * Karen karen.cravens@... [2007-11-07 04:00]: > > In large part, though, isn't REST *about* the details of > > implementing HTTP? > > Err, there is a bit of confusion here. > > First of all, mostly we’re talking about how to *use* HTTP, not > how to implement it; at least I doubt that the majority of people > on this list is concerned about the best way to open sockets and > poll them, parse headers, etc. > > Even if we move past that niggling point, though, casting the > question instead as “in large part, though, isn't REST *about* > the details of using HTTP?”, then this still isn’t right. REST > is an architectural style – it’s *two* abstractions removed from > actual applications. (One too many for most people, as Roy often > remarks in saying that.) HTTP is an architecture; one level of > abstraction closer. > > So if anything, it would have to be the other way around: > > Using HTTP well is largely about doing representational state > transfer. > > But there are aspects to using HTTP well (such as ETags and > friends) that don’t really have anything to do with REST at all > (in any deeper sense than that more RESTful systems potentially > benefit more from them). > > And coming back to the original question: URI design – or, more > broadly stated, address design – is orthogonal to the transfer > of state via representations. If you put all your emphasis on > addresses, you get REST/RPC hybrids. Representational state > transfer is all about representations, as the name implies, > not about addresses. > > All that said: > > I personally don’t mind any of these topics on this list. > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/> >
"Amaeze" <amaeze@...> writes: > In another thread I proposed using PUT'ing to the sub-resource of > resource to indicate a change in the business state of a resource e.g. > PUT \lightbulb\on would turn a light bulb on. I got some interesting > responses to this. > > Another (RESTful?) solution could be to POST to a resource that is > responsible for turning a lightbulb on. For example, POST \lightswitch > sending in the light bulb and state (on/off) chosen. > > My challenge with this second solution is that the other verbs will not > be used against this resource, in essence this resource is pretty much a > "POST-only" resource. > > Is that ok? What do you mean? I'd expect to be able to do GET /lightswitch and know if it is in the on or off position. ==> GET /lightswitch <== 200 OK <== <lightswitch><state>on</state></lightswitch> I'd also expect to be able to PUT /lightswitch and send in some directive to control it. ==> PUT /lightswitch ==> <lightswitch><state>off</state></lightswitch> <== 200 OK. Finally, depending on your domain rules, I'd think that I can do a successful DELETE against it, which could, say, be analogous to removing a real lightswitch; that may make sense if you want a continuously on light bulb or if you have no further need to control some circuit. YS. PS: '/' is the separator, not '\'.
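The GET/PUT/DELETE exchanges Yohanes sketches can be modeled with a small in-memory dispatcher. This is an added illustration, not part of the thread; the `handle` helper and the dictionary-as-representation are hypothetical stand-ins for an origin server:

```python
# One resource, keyed by path; its representation is a plain dict.
resources = {"/lightswitch": {"state": "on"}}

def handle(method, path, body=None):
    """Toy dispatch for the /lightswitch resource discussed above."""
    if path not in resources:
        return 404, None
    if method == "GET":
        return 200, dict(resources[path])  # read the current state
    if method == "PUT":
        resources[path] = dict(body)       # full-representation update
        return 200, dict(resources[path])
    if method == "DELETE":
        del resources[path]                # the switch itself goes away
        return 200, None
    return 405, None                       # method not allowed

print(handle("GET", "/lightswitch"))                    # (200, {'state': 'on'})
print(handle("PUT", "/lightswitch", {"state": "off"}))  # (200, {'state': 'off'})
print(handle("DELETE", "/lightswitch"))                 # (200, None)
print(handle("GET", "/lightswitch"))                    # (404, None)
```

The point of the sketch is that nothing about the resource is "POST-only": the uniform interface applies to it like any other resource.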
> In another thread I proposed using PUT'ing to the sub-resource of > resource to indicate a change in the business state of a resource e.g. > PUT \lightbulb\on would turn a light bulb on. I got some interesting > responses to this. I wouldn't do that. It's my understanding that "on" is either a meaningless resource or not a resource at all. > Another (RESTful?) solution could be to POST to a resource that is > responsible for turning a lightbulb on. For example, POST \lightswitch > sending in the light bulb and state (on/off) chosen. This sounds better, but you don't create another resource, you just update the status of the resource "lightswitch". Why don't you just PUT a representation of the resource lightswitch to update it? -- Lawrence, oluyede.org - neropercaso.it "It is difficult to get a man to understand something when his salary depends on not understanding it" - Upton Sinclair
Trivial examples always obscure the main point. :) --- In rest-discuss@yahoogroups.com, Yohanes Santoso <yahoo-rest-discuss@...> wrote: > > "Amaeze" amaeze@... writes: > > > In another thread I proposed using PUT'ing to the sub-resource of > > resource to indicate a change in the business state of a resource e.g. > > PUT \lightbulb\on would turn a light bulb on. I got some interesting > > responses to this. > > > > Another (RESTful?) solution could be to POST to a resource that is > > responsible for turning a lightbulb on. For example, POST \lightswitch > > sending in the light bulb and state (on/off) chosen. > > > > My challenge with this second solution is that the other verbs will not > > be used against this resource, in essence this resource is pretty much a > > "POST-only" resource. > > > > Is that ok? > > What do you mean? > > I'd expect to be able to do GET /lightswitch and know if it is in on > or off position. > > ==> GET /lightswitch > > <== 200 OK > <== <lightswitch><state>on</state></lightswitch> I agree > > > I'd also expect to be able to PUT /lightswitch and send in some > directive to control it. > > ==> PUT /lightswitch > ==> <lightswitch><state>off</state></lightswitch> > > <== 200 OK. Well this is where I have my problem. Your ultimate aim is to turn a light bulb on, i.e. modifying the state of the light switch is just a means to an end. For you to know what should happen to the light bulb you would have to interrogate the contents of the PUT, which IMHO means that you are passing method information. Am I wrong here? > > Finally, depending on your domain rules, I'd think that I can do a successful > DELETE against it which could, say, be the analogous of removing a > real lightswitch, which may makes sense if you want a continously on light > bulb or if you have no further need to control some circuit. > > > YS. > > PS: '/' is the separator, not '\'. >
--- In rest-discuss@yahoogroups.com, "Lawrence Oluyede" <l.oluyede@...> wrote: > > > In another thread I proposed using PUT'ing to the sub-resource of > > resource to indicate a change in the business state of a resource e.g. > > PUT \lightbulb\on would turn a light bulb on. I got some interesting > > responses to this. > > I wouldn't do that. It's my understanding that "on" is a either a > meaningless resource or not a resource altogether. But its a "state" of the resource, why can't it be modeled as a state? > > Another (RESTful?) solution could be to POST to a resource that is > > responsible for turning a lightbulb on. For example, POST \lightswitch > > sending in the light bulb and state (on/off) chosen. > > This sounds better but you don't create another resource, you just > update the status of the resource "lightswitch". > Why don't you just PUT a representation of the resource lightswitch to > update it? Because I am about to impact another resource i.e. light bulb. Additionally, for me to effectively indicate what exactly I want to happen to the light bulb I have to include "on" in the body of the PUT. > -- > Lawrence, oluyede.org - neropercaso.it > "It is difficult to get a man to understand > something when his salary depends on not > understanding it" - Upton Sinclair >
> Because I am about to impact another resource i.e. light bulb. > Additionally, for me to effectively indicate what exactly I want to > happen to the light bulb I have to include "on" in the body of the > PUT. What's the problem in putting the resource state in the body if all you want to do is *exactly* that? If you want to update a resource, you have to provide all the information the server needs to update the resource (eg. a full representation or a subset) -- Lawrence, oluyede.org - neropercaso.it "It is difficult to get a man to understand something when his salary depends on not understanding it" - Upton Sinclair
"Amaeze" <amaeze@...> writes: > But what really is a resource - isn't it anything? If I created a > resource \ordershipper and POST'd a document to it, what would make that > any more of a resource? It's anything that your implementation cares about or depends on; and that depends on the context. Let's apply this to your lightswitch example in another email. Imagine if there are thousands of lightswitches, each controlling a pixel in an LED message board[1]. There is software running on the board that will take in a text and figure out which switches to toggle so as to display the text on the board. To the board software, the lightswitches are important resources. But on a highway, there are many many such boards. All those boards are centrally managed by a controller software. The operator of the controller decides what text gets sent to which board. The controller software shouldn't care about the lightswitches within the boards. To it, the boards are the resources and there is no need to know the status of the individual lightswitches. In fact, for the sake of genericism, it should not know that there are lightswitches at all. It still has to work with a board using a technology that can display some text without toggling any lightswitches. YS. Footnotes: [1] http://ops.fhwa.dot.gov/wz/technologies/michigan/images/figure3.jpg Original document: http://ops.fhwa.dot.gov/wz/technologies/michigan/index.htm
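Yohanes's layering point can be sketched in a few lines: the controller sees only the board resource, while the lightswitches (pixels) stay an implementation detail behind it. This is an added illustration with hypothetical class names, not code from the thread:

```python
class LedBoard:
    """Origin server for one message board; owns its pixel 'lightswitches'."""
    def __init__(self, width=8):
        self.pixels = [False] * width  # the hidden lightswitch resources

    def put_text(self, text):
        # Crude rendering: one pixel per character, lit for non-space chars.
        for i in range(len(self.pixels)):
            self.pixels[i] = i < len(text) and text[i] != " "
        return 200

class Controller:
    """Knows boards only as resources; never touches pixels directly."""
    def __init__(self, boards):
        self.boards = boards

    def broadcast(self, text):
        return [board.put_text(text) for board in self.boards]

boards = [LedBoard(), LedBoard()]
Controller(boards).broadcast("SLOW")
print(boards[0].pixels)  # pixels toggled, but the controller never saw them
```

A board that renders text without any "lightswitches" at all would satisfy the same interface, which is the genericism Yohanes is after.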
--- In rest-discuss@yahoogroups.com, "Lawrence Oluyede" <l.oluyede@...> wrote: > > > Because I am about to impact another resource i.e. light bulb. > > Additionally, for me to effectively indicate what exactly I want to > > happen to the light bulb I have to include "on" in the body of the > > PUT. > > What's the problem in putting the resource state in the body if all > you want to do is *exactly* that? > If you want to update a resource, you have to provide all the > information the server needs to update the resource > (eg. a full representation or a subset) > If the PUT had no side-effects, then I would see no problem with this. Let me give a business example. In the supply chain domain, an order can be "shipped", "canceled", "picked" etc; each of these operations has side effects - business meaning. Assuming that \orders\[order_id] is my resource, how would I indicate that I wanted that order "picked" versus "canceled"? Note that both "pick" and "cancel" also modify the internal state of an order, just as setting a light bulb on/off does. How do I model RESTfully a change in the state of the resource that has business meaning, i.e. a change that is more than a basic internal state change? Am I making sense? :) > -- > Lawrence, oluyede.org - neropercaso.it > "It is difficult to get a man to understand > something when his salary depends on not > understanding it" - Upton Sinclair >
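One way to make "picked" versus "canceled" explicit is to treat the order as a small state machine and reject representations that name an illegal transition. This sketch is my own addition; the transition table and status codes are hypothetical choices, not anything prescribed in the thread:

```python
# Allowed transitions; states absent from the table are terminal.
TRANSITIONS = {
    "new":    {"picked", "cancelled"},
    "picked": {"shipped", "cancelled"},
}

class Order:
    def __init__(self):
        self.state = "new"

    def put_state(self, new_state):
        """Analogous to PUT/POST of a representation naming the next state."""
        if new_state in TRANSITIONS.get(self.state, set()):
            self.state = new_state
            return 200              # transition applied
        return 409                  # Conflict: illegal from the current state

order = Order()
print(order.put_state("picked"))     # 200
print(order.put_state("shipped"))    # 200
print(order.put_state("cancelled"))  # 409: can't cancel a shipped order
```

The client still just transfers a representation of the desired state; the business meaning lives in which transitions the server accepts.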
In my mind the words "side effect" ring the POST bell. So we're back to square one. -- Lawrence, oluyede.org - neropercaso.it "It is difficult to get a man to understand something when his salary depends on not understanding it" - Upton Sinclair
> Let's apply this to your lightswitch example in another email. Imagine > if there are thousands of lightswitch, each controlling a pixel in a > LED message board[1]. There is a software running on the board that > will take in a text and figures out which switches to toggle so as to > display the text on the board. Definitely makes sense, but therein lies my point. If the only thing the board can do (or have happen to it) is the toggling of switches, then it's a POST(?) with the text to change to, end of story. If I can manipulate it in a different way, how do I then distinguish between changing the text and doing something else?
On Nov 8, 2007, at 5:20 PM, Amaeze wrote:
> > Let's apply this to your lightswitch example in another email.
> Imagine
> > if there are thousands of lightswitch, each controlling a pixel in a
> > LED message board[1]. There is a software running on the board that
> > will take in a text and figures out which switches to toggle so as
> to
> > display the text on the board.
>
> Definitely makes sense, but therein lies my point. If the only thing
> the board can do (or can happen to the board) is the toggling of
> switches then it's a POST(?) with the text to change too end of story.
>
It depends on how you model this. POST creates resources; if the
text is a resource itself, that's fine, but if the resource is the
board and the text has the role of board state data, then you'd PUT to
the board to update its state.
> If I can manipulate in a different way how do I then distinguish
> between changing text and doing something else?
>
You need to model your data in a resource-oriented manner; it depends.
Resources may be ad hoc, may map clearly to a regular data model, can
be logical like /releases/latest.tar.gz, or "abstract" like
transactions.
For example you may decide that
/board/{id}/leds/{x,y}
is a resource with an on/off flag... It's up to you. Constraints are:
ROA, correct usage of HTTP verbs and HTTP status codes, etc. HTTP is
fixed, the ROA part is the one that is movable.
-- fxn
On Nov 8, 2007, at 8:06 AM, Amaeze wrote: > Assuming that \orders\[order_id] is my resource how would I indicate > that I wanted that order "picked" versus "canceled". Note that both > "pick" and "cancel" also modify the internal state of an order just > setting a light bulb on/off does. > > How do I model RESTfully, change in the state of the resource that has > business meaning i.e. the change is more than a basic internal state > change? I prefer to model something like this as two collection resources -- e.g. /cancelled-orders and /picked-orders -- with the state change as a "move" from one to the other. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
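Stefan's collection-based modeling can be sketched with sets standing in for /orders, /picked-orders and /cancelled-orders, where the "state change" is a move between collections. An added illustration with hypothetical names, not code from the thread:

```python
# Each collection resource is modeled as a set of order identifiers.
collections = {
    "/orders": {"order-1", "order-2"},
    "/picked-orders": set(),
    "/cancelled-orders": set(),
}

def move(order_id, source, target):
    """Analogous to removing from one collection and adding to another."""
    if order_id not in collections[source]:
        return 404                      # not a member of the source collection
    collections[source].discard(order_id)
    collections[target].add(order_id)
    return 200

move("order-1", "/orders", "/picked-orders")
move("order-2", "/orders", "/cancelled-orders")
print(sorted(collections["/picked-orders"]))     # ['order-1']
print(sorted(collections["/cancelled-orders"]))  # ['order-2']
```

A nice property of this modeling is that an order's business state is readable with a plain GET on the collections, with no verb-like payloads at all.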
--- In rest-discuss@yahoogroups.com, Stefan Tilkov <stefan.tilkov@...> wrote: > > On Nov 8, 2007, at 8:06 AM, Amaeze wrote: > > > Assuming that \orders\[order_id] is my resource how would I indicate > > that I wanted that order "picked" versus "canceled". Note that both > > "pick" and "cancel" also modify the internal state of an order just > > setting a light bulb on/off does. > > > > How do I model RESTfully, change in the state of the resource that has > > business meaning i.e. the change is more than a basic internal state > > change? > > > I prefer to model something like this as two collection resources -- > e.g. /cancelled-orders and /picked-orders -- with the state change as > a "move" from one to the other. My slashes are all wrong. :) Thanks for correcting that. I think I like that better than /orders/[orderID]/canceled or /OrderPicker (making a verb a noun and very RPC-ish IMHO). I will ponder. > Stefan > -- > Stefan Tilkov, http://www.innoq.com/blog/st/ >
"Amaeze" <amaeze@...> writes:

> --- In rest-discuss@yahoogroups.com, "Lawrence Oluyede" <l.oluyede@...> wrote:
>>
>> > Because I am about to impact another resource i.e. light bulb.
>> > Additionally, for me to effectively indicate what exactly I want to
>> > happen to the light bulb I have to include "on" in the body of the
>> > PUT.
>>
>> What's the problem in putting the resource state in the body if all
>> you want to do is *exactly* that?
>> If you want to update a resource, you have to provide all the
>> information the server needs to update the resource
>> (e.g. a full representation or a subset)
>
> If the PUT had no side-effects, then I would see no problem with
> this.

What do you mean by 'side-effect'? All methods are allowed to have
server-side side-effects.

> How do I model RESTfully a change in the state of the resource that
> has business meaning, i.e. the change is more than a basic internal
> state change?

A PUT is allowed to have server-side side-effects. It is allowed to
trigger a chain reaction of actions. When you PUT a cancellation, you
are allowed to contact your suppliers, shippers, and accountants, and
notify them that the order has been cancelled.

YS.
--- In rest-discuss@yahoogroups.com, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
>
> What do you mean by 'side-effect'? All methods are allowed to have
> server-side side-effects.
>
> A PUT is allowed to have server-side side-effects. It is allowed to
> trigger a chain reaction of actions. When you PUT a cancellation, you
> are allowed to contact your suppliers, shippers, and accountants, and
> notify them that the order has been cancelled.

My issue wasn't really using PUT; it was the resource that was being
PUT to. Yes, PUTs can have side-effects, and in fact your
implementation really should determine whether you expose something as
PUT or POST (idempotence).

> YS.
> I prefer to model something like this as two collection resources --
> e.g. /cancelled-orders and /picked-orders -- with the state change as
> a "move" from one to the other.

Having thought about this some more, I'm leaning towards

PUT /shipped-orders/[orderId]

over

POST /shipped-orders/

since I know the order I would like to ship, and shipping should be
idempotent. However, I would model "pick" as POST /picked-orders
because an order can be picked several times in its "lifetime".

Any thoughts on other things that could influence my decision?
* Stefan Tilkov <stefan.tilkov@...> [2007-11-08 17:50]:
> I prefer to model something like this as two collection
> resources -- e.g. /cancelled-orders and /picked-orders --
> with the state change as a "move" from one to the other.

That fits nicely on the face of it… but it seems tricky to
implement in HTTP to me. How is the move operation initiated?
Do you adopt MOVE from WebDAV? Or do you use another verb – if
so, to what URI and with what representation? And how is this
communicated in hypermedia?

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
"A. Pagaltzis" <pagaltzis@...> writes:

> That fits nicely on the face of it… but it seems tricky to
> implement in HTTP to me. How is the move operation initiated?
> Do you adopt MOVE from WebDAV? Or do you use another verb – if
> so, to what URI and with what representation? And how is this
> communicated in hypermedia?

Given that an order can only be in one of these bins (cancelled,
picked, shipped, etc.) at a time, why couldn't you just POST an order
id to one of these bins to cause an implicit movement?

YS
--- In rest-discuss@yahoogroups.com, Yohanes Santoso <yahoo-rest-discuss@...> wrote:
>
> Given that an order can only be in one of these bins (cancelled,
> picked, shipped, etc.) at a time, why couldn't you just POST an order
> id to one of these bins to cause an implicit movement?

That's essentially what I would do for the "operations" that could
occur multiple times. Since "ship" should only occur once, I would PUT.

> YS
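[Editorial aside: the trade-off in this subthread -- POST for "pick",
which may recur; PUT for "ship", which should be idempotent -- can be
sketched in a few lines. The bin names and the in-memory dicts below
are purely hypothetical illustration, not a server implementation.]

```python
# Hypothetical sketch of the "order bins" model from this thread:
# an order lives in exactly one bin at a time, and sending its id
# to a bin moves it there implicitly.

bins = {}        # order id -> current bin
pick_count = {}  # order id -> how many times it has been picked

def post_to_picked(order_id):
    """POST /picked-orders -- repeatable, NOT idempotent:
    an order may be picked several times in its lifetime."""
    bins[order_id] = "picked-orders"
    pick_count[order_id] = pick_count.get(order_id, 0) + 1

def put_shipped(order_id):
    """PUT /shipped-orders/{order_id} -- idempotent:
    repeating the request leaves the same state as doing it once."""
    bins[order_id] = "shipped-orders"

post_to_picked("123")
post_to_picked("123")   # a second pick changes state (count goes to 2)
put_shipped("123")
put_shipped("123")      # a retried PUT changes nothing further
```

Retrying the PUT blindly is therefore safe, which is exactly why
"ship" fits PUT and "pick" fits POST.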
OK, to get away from HTTP: what is there in existing work on a
REST-like service API over XMPP?

I want to use XMPP as the routing protocol, to send data to things
(processes) deployed on machines without fixed IP addresses (say on
EC2 or similar). I also want the option of using the same front end to
talk to classic HTTP(S)-accessible systems.

What I may do is create a custom XMPP message, call it RestOperation,
with the normal verbs+headers as for HTTP, routing the stuff to the
back end, which can treat it as a normal servlet request. The response
would be packaged up and resent to the sender, with status code.

The problem here is addressing. I'm going to talk to something like
node1@jabberfarm from node2@jabberfarm. Everything sending a message
needs a unique address to identify the sender. Everything receiving a
message also needs a unique jabber id. Where do URIs fit in here? I
could have some URI like xmpp://node@jabberfarm/services/ntp/status,
and have node@jabberfarm handle the dispatch. For responses, well, I'd
have to do something there with a unique address for every client, or
one per process + dispatch on some URI in the response.

The other option is that I create lots and lots of jabber identities,
which is OK for a private Openfire server, but doesn't scale to
reusing third-party relay servers (or should I call them ORBs :), like
Google Talk. Using a separate jabber ID for every resource stops me
having a notion of hierarchical resources, but lets me do very dynamic
stuff, in which resources are free to move around on the network. This
is very ORBy, except I'd be staying with the usual verbs.

Thoughts?

-steve
A. Pagaltzis wrote:

> That fits nicely on the face of it… but it seems tricky to
> implement in HTTP to me. How is the move operation initiated?
> Do you adopt MOVE from WebDAV? Or do you use another verb – if
> so, to what URI and with what representation? And how is this
> communicated in hypermedia?

I think this is one of the big stumbling points for people trying to
jump onto the REST bandwagon: how to handle non-trivial mutation
operations. The GET (read) operations are pretty obvious. It would be
really great to have a *publicly available* set of 'patterns' described
for transaction models like this, instead of the endless debate on the
blogs and here :-)

Or even just a non-trivial application scenario. Joe got close with his
post on "RESTify DayTrader"

http://bitworking.org/news/201/RESTify-DayTrader

but I actually read different semantics for DayTrader. And the data
wasn't described at all.

--
Patrick Mueller
http://muellerware.org
Amaeze wrote:
> If the PUT had no side-effects, then I would see no problem with
> this.

Side-effects do not affect PUT's idempotency.

What is required for PUT is that if you were to do it twice -- without
doing anything in between -- it would have the same basic effect as if
you'd done it once. Whether that effect affects other resources or not
is not an issue.

(Conceptually, one can always imagine a resource that is part of any
other resource, so one could always conceive of any PUT operation as
having a side-effect upon that resource.)
--- In rest-discuss@yahoogroups.com, Jon Hanna <jon@...> wrote:
>
> Side-effects do not affect PUT's idempotency.
>
> What is required for PUT is that if you were to do it twice - without
> doing anything in between - it would have the same basic effect as
> if you'd done it once. Whether that effect affects other resources or
> not is not an issue.

Understood. So ultimately, if the client is making the call on the
"internal" state of a resource in non-relative terms, then PUT;
otherwise POST.
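[Editorial aside: Jon's definition can be stated as a property --
doing it twice, with nothing in between, has the same basic effect as
doing it once. A toy check, with the resource state as a plain dict
(all names hypothetical), contrasting an absolute PUT-style update
with a relative POST-style one:]

```python
# Idempotency as a property: f(f(state)) == f(state).
# The "resource" is just a dict here, purely for illustration.

def put_luminosity(state, value):
    """PUT-style update: sets an absolute value, so repeating it
    leaves the resource in the same state."""
    new = dict(state)
    new["luminosity"] = value
    return new

def post_increase(state, delta):
    """POST-style update: a relative change, so a blind retry
    keeps moving the state further."""
    new = dict(state)
    new["luminosity"] = new.get("luminosity", 0) + delta
    return new

s = {"luminosity": 10}
# put twice == put once; post twice != post once
```

This matches Amaeze's conclusion: non-relative ("internal state in
absolute terms") updates fit PUT, relative ones fit POST.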
> I think this is one of the big stumbling points for people trying to
> jump onto the REST bandwagon. How to handle non-trivial mutation
> operations. The GET (read) operations are pretty obvious. It would be
> really great to have a *publicly available* set of 'patterns' described
> for transaction models like this. Instead of the endless debate on the
> blogs and here :-)
>
> Or even just a non-trivial application scenario. Joe got close with his
> post on "RESTify DayTrader"
>
> http://bitworking.org/news/201/RESTify-DayTrader
>
> but I actually read different semantics for DayTrader. And the data
> wasn't described at all.
>
> --
> Patrick Mueller
> http://muellerware.org

I could not agree any less. :)
Hi,

I am writing a hyperdata web browser client, and was wondering if
there were client-side Java libraries to cache HTTP responses, so that
I could ask the local cache for a URL and it would return me the
cached version if there were no reason to get something fresh from the
web, or go out and get something on the web. It would be nice if it
knew everything about HTTP redirects.

I would also like to be able to browse the cache easily for debugging
purposes. I would like to be able to pull up a window of cache
history, find the last request made, and be able to trace all the
requests that were made as part of a request.

It is important to be able to browse the message exchanges easily so
as to make it easy to debug all of this.

Henry

Home page: http://bblfish.net/
Sun Blog: http://blogs.sun.com/bblfish/
Foaf name: http://bblfish.net/people/henry/card#me
On Nov 8, 2007, at 1:39 PM, A. Pagaltzis wrote:

> That fits nicely on the face of it, but it seems tricky to
> implement in HTTP to me. How is the move operation initiated?
> Do you adopt MOVE from WebDAV? Or do you use another verb; if
> so, to what URI and with what representation? And how is this
> communicated in hypermedia?

This was not exactly unexpected :-) I think it depends on which
trade-off is more acceptable to me: building something that only works
if both /cancelled-orders and /picked-orders are held by the same
server, or incurring a bit of overhead.

In the first scenario, I could POST a representation containing the
URI of the order to be cancelled (let's say /submitted/123) to
/cancelled. The server would internally move the resource and return
the location of the "new" (cancelled) order.

In the second scenario, I'd also do a POST with the URI of the
original to /cancelled. The server would retrieve the order
representation with GET, store it internally, DELETE the old one, and
return the location of the new one.

As the original POST contains the URI of the order to be moved, it can
be made idempotent (assuming the server knows both the old and new URI).

On Nov 9, 2007, at 7:11 AM, Patrick Mueller wrote:

> I think this is one of the big stumbling points for people trying to
> jump onto the REST bandwagon. How to handle non-trivial mutation
> operations. The GET (read) operations are pretty obvious. It would be
> really great to have a *publicly available* set of 'patterns'
> described for transaction models like this. Instead of the endless
> debate on the blogs and here :-)

+1 - that would be really excellent.
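[Editorial aside: Stefan's second scenario -- GET the original, store
a copy, DELETE the old one, return the new location -- can be sketched
roughly as below. The helpers `http_get`/`http_delete`, the store, and
the URI scheme are all hypothetical stand-ins, not a real server.]

```python
# Sketch of the second scenario above: the server backing /cancelled
# receives a POST containing the URI of the order to be moved.

def handle_post_to_cancelled(order_uri, http_get, http_delete, store):
    """Move an order into the cancelled collection, even when the
    original lives on another server."""
    representation = http_get(order_uri)       # fetch the original
    new_uri = "/cancelled/" + order_uri.rsplit("/", 1)[-1]
    store[new_uri] = representation            # keep our own copy
    http_delete(order_uri)                     # remove the original
    return 201, {"Location": new_uri}          # point at the new resource
```

Because the POST body names the order's old URI, the server can detect
a retried move and still answer with the same new location, which is
how the operation can be made idempotent in practice.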
Some candidates:

- resource movement
- long-running business transactions
- user activity tracking
- queries
- paging result sets

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
Hi Henry,

Story Henry wrote:
> Hi,
>
> I am writing a hyperdata web browser client, and was wondering if
> there were client side java libraries
> [snip]

Could you achieve the same (or similar anyway) effect by running a
caching HTTP server to perform the actual caching, and just make your
regular HTTP requests of the local server, which would then decide
whether to use the local version or make a network request? Instead of
a (Java) API call you'd be making HTTP calls only.

- John
Doesn't the JDK HTTP client support caching?

On 11/9/07, Story Henry <henry.story@...> wrote:
> I am writing a hyperdata web browser client, and was wondering if
> there were client side java libraries to cache http responses, so
> that I could ask the local cache for a URL and it would return me the
> cached version if there were no reason to get something fresh from
> the web, or go out and get something on the web.
> [snip]
Home page: http://bblfish.net/
Sun Blog: http://blogs.sun.com/bblfish/
Foaf name: http://bblfish.net/people/henry/card#me

On 9 Nov 2007, at 18:11, John Kemp wrote:
> Could you achieve the same (or similar anyway) effect by running a
> caching HTTP server to perform the actual caching, and just make your
> regular HTTP requests of the local server, which would then decide
> whether to use the local version or make a network request? Instead
> of a (Java) API call you'd be making HTTP calls only.
> [snip]
On 11 Nov 2007, at 00:51, Subbu Allamaraju wrote:
> On Nov 10, 2007 12:21 PM, Brandon Carlson <bcarlso@...> wrote:
>
> > Doesn't the JDK HTTP client support caching?
>
> No.

I think Brandon may have been thinking of the fact that you can pass a
couple of parameters on JVM startup to specify the cache to use. If
you do this, all calls to url.openConnection() will then pass those
calls through the specified cache. See:

http://www.rgagnon.com/javadetails/java-0085.html

On 9 Nov 2007, at 18:11, John Kemp wrote:
> Could you achieve the same (or similar anyway) effect by running a
> caching HTTP server to perform the actual caching, and just make your
> regular HTTP requests of the local server, which would then decide
> whether to use the local version or make a network request? Instead
> of a (Java) API call you'd be making HTTP calls only.

Thanks John and Subbu for this idea. It does seem like a good way to
get going. As more and more applications become web enabled, one even
wonders if this should not come with the operating system...

Does anyone know of a small proxy cache for clients written in Java?

My client probably will need to keep close track of all the headers
and redirects of the call, so that it can help people debug the
services they encounter on the web, and also because it will be useful
to know which things are information resources and which are not.
For example, Tim Bray is not an information resource - though he is a
very resourceful source of information - as you can see by querying

curl -i -L -H "Accept: application/rdf+xml" http://dbpedia.org/resource/Tim_Bray

Henry

[snip the never ending appended messages]
On 11 Nov 2007, at 10:47, Story Henry wrote:
> Does anyone know of a small proxy cache for clients written in java?

The following restlet issue has some very useful pointers on the
subject:

http://restlet.tigris.org/issues/show_bug.cgi?id=25

So what I am looking for, according to http://www.mnot.net/cache_docs/,
is either a proxy cache or a browser cache. (Searching for "browser
cache java" on Google is a hopeless exercise.)

There are a number of open source caching libraries:

http://java-source.net/open-source/cache-solutions

but these seem designed for object caching, not client-side HTTP
caching. Perhaps one of them could be useful for building such a
client-side cache...

It is interesting to see how Mozilla does things:

http://www.mozilla.org/projects/netlib/http/http-caching-faq.html

though a more up-to-date version would be nice. The restlet issue has
more interesting pointers; no need to duplicate them here.

Conclusion: I have not yet found a ready-made browser cache library
for Java, but I have good pointers.

Henry

P.S. On the same subject, I have often wondered if there was a good
cache one could use for presentations. It would be nice to be able to
have a local cache in case the network goes down, that would return
only cached representations when the network is down.
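[Editorial aside: absent a ready-made library, a toy version of the
cache Henry describes is not much code. The sketch below only honors
Cache-Control: max-age, keeps everything in memory, and takes the
network fetch as an injected function; it is nowhere near a compliant
HTTP cache (no Expires, no validators, no Vary), just an illustration
of the shape of the thing.]

```python
import time

class TinyCache:
    """Toy client-side cache: serve a stored response while its
    max-age holds, otherwise call out to the network. Keeps a
    history list so the exchanges can be browsed for debugging,
    as Henry asks for above."""

    def __init__(self, fetch, clock=time.time):
        self.fetch = fetch     # fetch(url) -> (headers, body)
        self.clock = clock
        self.store = {}        # url -> (stored_at, headers, body)
        self.history = []      # (url, "hit" | "miss") pairs

    def _max_age(self, headers):
        # Parse max-age out of a Cache-Control header, default 0.
        for part in headers.get("Cache-Control", "").split(","):
            part = part.strip()
            if part.startswith("max-age="):
                return int(part[len("max-age="):])
        return 0

    def get(self, url):
        if url in self.store:
            stored_at, headers, body = self.store[url]
            if self.clock() - stored_at < self._max_age(headers):
                self.history.append((url, "hit"))
                return body
        headers, body = self.fetch(url)
        self.store[url] = (self.clock(), headers, body)
        self.history.append((url, "miss"))
        return body
```

The injected `fetch` is also what makes the "network down" case from
the P.S. easy: swap in a fetch that raises, and only cached
representations come back.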
Henry,

That is what I was thinking. I knew there was something in there WRT
caching, but I've never used it, so I wasn't sure exactly what it was.

Thanks!
Brandon

On 11/11/07, Story Henry <henry.story@...> wrote:
> I think Brandon may have been thinking of the fact that you can pass
> a couple of parameters on JVM startup to specify the cache to use.
> [snip]
On Nov 9, 2007, at 3:27 PM, Rajith Attapattu wrote:
> After closely following all the discussion around REST, I have a few
> questions on which I would like to clear my doubts.
>
> 1) One of the questions I have seen floating around is "how can I
> do a shopping cart application in a RESTFul way".
By defining a client-side cart that can be directly manipulated
on the client with products identified by the mark-up in catalog
sites, each with links to the cashier. That is the truly RESTful
way to do a shopping cart.
> Most people realize that you shouldn't use sessions bcos it
> violates the Stateless constraint.
>
> Now what some folks suggest is that when you do a PUT
> http://abc.com/customer/1235/basket/
> (where the body contains a document that describes the items and
> quantities to add), you service that request and then you return a
> URL with the state encoded.
"some folks" don't know what they are talking about. Such a site
would use POST and message bodies, not state-specific URLs, and
plain old HTTP authentication for identifying the customer.
> I have the following questions on that.
> a) Now is this RESTful?
Only if the site told the client to do that.
> b) Also, there is only so much information that can be encoded, since
> a URL has a maximum length
There is no maximum length on a URL, but nobody would do that anyway.
> c) Anybody intercepting this URL may be able to decode information
> (if SSL is not used)
Irrelevant, the same is true of any solution.
> 2) Most people do RPC over HTTP (Ex. some of the examples given in
> JSR 311 looks more RPC and not RESTful)
Nonsense. Most people use HTTP to follow links in hypertext.
> What constraints in general does RPC/HTTP violate?
Hypertext as the engine of application state. There are other things
that RPC mechanisms are traditionally bad for (streaming, coupling,
etc.) but those are not constraints.
> Most RPC operations are stateful, so u can think they violate the
> stateless constraint. What else?
That is orthogonal (anyone can build a stateless RPC).
> Consider the following examples
>
> doAddition(int x, int y) - POST http://abc.com/doAddition/ (body
> contains x & y)
> this simply does an addition, but it is stateless. However
> intuition tells me it's not RESTful.
> Why is that?? What have I misunderstood here?
How does the client learn what to do? That is hypertext.
> Here is another example that I don't understand.
> increaseLuminosity(x) POST http://abc.com/increaseLuminosity (body
> contains x)
> {
> I get the bulbs' state from a data base.
> I increase it.
> I persist the new state.
> }
> Now my service is stateless, however the database contains the state.
> Again I don't understand exactly what constraints are violated but
> intuition tells me it's not RESTful
It may be RESTful, but it is still a stupid design if it doesn't
respond with a representation of the new state. Feedback is good.
> 3) Reliability with RESTful interactions.
> Forgetting the security concerns for the time being consider the
> following example.
>
> I am trying to create a service account for a customer.
> PUT http://abc.com/customer/{id}/savings
That assumes you know the service URI for new savings accounts,
which means it isn't RESTful. A POST to /customer/ would be more
accurate, though in reality a new account service is not something
that the client would participate in --- try opening a bank account
at citibank for example. All the client does is provide information
for the account -- only the server needs to know why that information
is being provided, and they deliberately do not want it automated.
> Now the server process the request, but goes down before it can
> send the response. Since PUT is idempotent, the client will retry
> again until it gets a response.
> However when the server comes back up, there is no reconciliation
> process like you would get with WS-RM. So the client will always
> retry until it is successful.
Well, if you create a stupid design, it will do stupid things ...
Think of it instead as a series of individual POST requests that are
building up a combined resource that will eventually be a savings
account when finished. Each of those requests can include parameters
that perform the same role as an ETag -- basically, identifying the
client's view of the current state of the resource. Then, when a
request is repeated or a state-change lost, the server would see
that in the next request and tell the client to refresh its view
of the form before continuing to the next step.
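[Editorial aside: Roy's ETag-role parameter can be sketched as a
version token the client echoes back with each step; a stale token
gets a 409-style rejection telling the client to refresh its view of
the form before continuing. All names below are hypothetical.]

```python
# Sketch of the "client's view of the current state" check described
# above: each request in the series carries the version the client
# last saw, playing the same role as an ETag.

class AccountForm:
    def __init__(self):
        self.fields = {}
        self.version = 0

    def apply_step(self, seen_version, field, value):
        """Apply one POST in the series. A 409 means the client's
        view is stale and it must refresh before the next step."""
        if seen_version != self.version:
            return 409, self.version     # stale view: refresh first
        self.fields[field] = value
        self.version += 1
        return 200, self.version
```

A repeated request (after a lost response, say) carries the old
version, so the server sees the repetition and tells the client to
refresh instead of silently applying it twice.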
> But if you do the following, where u want to add some money to your
> account.
> POST http://abc.com/customer/{id}/savings/ - the body contains the
> amount.
>
> Now if the server crashes after processing but before sending the
> response, or if the client crashes before getting the response, the
> client will retry again.
> Now POST is not idempotent and each retry will keep on adding money.
So don't write it that way. Automated recovery from missing responses
is a trivial problem -- just include a request number in the form.
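[Editorial aside: the request-number idea can be sketched as
server-side deduplication -- remember which numbers have already been
applied, and treat a repeat as a no-op that just returns the current
state. Names below are hypothetical.]

```python
# Sketch of Roy's request-number suggestion: a retried POST deposit
# carrying the same request number is applied at most once.

class Savings:
    def __init__(self):
        self.balance = 0
        self.seen = set()   # request numbers already applied

    def deposit(self, request_no, amount):
        if request_no in self.seen:
            return self.balance      # duplicate retry: no double add
        self.seen.add(request_no)
        self.balance += amount
        return self.balance
```

With this, the client can safely retry a POST whose response was lost,
which answers the "each retry will keep on adding money" worry below.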
> Is this a category of applications that REST is not suitable for?
> or else what is the correct form to use when building such
> application in a RESTful way.
>
> 4) Security with RESTful interactions.
> a) The above example naturally raise questions about security.
> b) SSL is only point-to-point, so if you have to go through
> multiple intermediaries, how would you ensure privacy, non
> repudiation ..etc ?
SSL tunnels through intermediaries. However, there is nothing stopping
anyone from defining an encrypted message exchange within RESTful
communication (it is just another media type) aside from the fact
that shared secret keys are not effective with the general public.
Likewise, HTTP authentication is completely extensible (witness the
most recently defined AWS auth scheme). So the answer to your question
is that it has nothing to do with REST.
....Roy
On Nov 10, 2007, at 12:27 AM, Rajith Attapattu wrote:
>
> Hi All,
>
> After closely following all the discussion around REST, I have a few
> questions on which I would like to clear my doubts.
>
> 1) One of the questions I have seen floating around is "how can I do
> a shopping cart application in a RESTFul way".
> Most people realize that you shouldn't use sessions bcos it violates
> the Stateless constraint.
>
> Now what some folks suggest is that when you do a PUT http://abc.com/customer/1235/basket/
> (where the body contains a document that describes the items and
> quantities to add), you service that request and then you return a
> URL with the state encoded.
That is an exceptionally weird design that I've never seen suggested
yet.
>
> I have the following questions on that.
> a) Now is this RESTful?
I would claim it's not.
>
> b) Also, there is only so much information that can be encoded, since
> a URL has a maximum length
In theory there is no limit; in practice, there may be one. But I
think you rarely end up with URLs that are too long if you do a
RESTful design.
>
> c) Anybody intercepting this URL may be able to decode information
> (if SSL is not used)
>
> 2) Most people do RPC over HTTP
Hm, I think I disagree, but even if it were true - so what?
> (Ex. some of the examples given in JSR 311 looks more RPC and not
> RESTful)
Could you be more specific? Which examples?
>
> What constraints in general does RPC/HTTP violate?
Nearly all of them, but it of course depends on the particular
RPC-style HTTP usage we're talking about. Often URLs are not used to
identify resources, communication is stateful, the interface is not
uniform, hypermedia is not used, and the meaning of the verbs is
violated.
>
> Most RPC operations are stateful, so u can think they violate the
> stateless constraint.
I think you're mixing communication state and resource (backend)
state ...
> What else?
> Consider the following examples
>
> doAddition(int x, int y) - POST http://abc.com/doAddition/ (body
> contains x & y)
> this simply does an addition, but it is stateless. However intuition
> tells me it's not RESTful.
> Why is that?? What have I misunderstood here?
It seems perfectly RESTful to me - there doesn't seem to be any
reasonable resource state. The URL smells of RPC, but it's just a
string from a REST perspective. I'd feel a little better if it were /
addition instead of /doAddition. In fact, I'd even consider using GET
as the operation is both safe and idempotent.
>
>
> Here is another example that I don't understand.
> increaseLuminosity(x) POST http://abc.com/increaseLuminosity (body
> contains x)
> {
> I get the bulbs' state from a data base.
> I increase it.
> I persist the new state.
> }
> Now my service is stateless, however the database contains the state.
Which is OK - the constraint says to avoid communication state, not
resource state.
>
> Again I don't understand exactly what constraints are violated but
> intuition tells me it's not RESTful
>
> 3) Reliability with RESTful interactions.
> Forgetting the security concerns for the time being consider the
> following example.
>
> I am trying to create a service account for a customer.
> PUT http://abc.com/customer/{id}/savings
>
> Now the server processes the request, but goes down before it can send
> the response. Since PUT is idempotent, the client will retry again
> until it gets a response.
> However when the server comes back up, there is no reconciliation
> process like you would get with WS-RM. So the client will always
> retry until it is successful.
>
> But if you do the following, where you want to add some money to your
> account.
> POST http://abc.com/customer/{id}/savings/ - the body contains the
> amount.
>
> Now if the server crashes after processing but before sending the
> response, or if the client crashes before getting the response, the
> client will retry again.
> Now POST is not idempotent and each retry will keep on adding money.
>
> Is this a category of applications that REST is not suitable for? Or
> else what is the correct form to use when building such an application
> in a RESTful way?
You either switch to PUT or make POST idempotent. See Joe Gregorio's
RESTifying DayTrader example:
http://bitworking.org/news/201/RESTify-DayTrader
>
>
> 4) Security with RESTful interactions.
> a) The above example naturally raises questions about security.
Does it?
>
> b) SSL is only point-to-point, so if you have to go through multiple
> intermediaries, how would you ensure privacy, non-repudiation, etc.?
You can of course use XML Encryption and Digital Signature with
RESTful HTTP, but that applies only if you're using XML. If SSL
doesn't suit your needs, you may have a problem.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
[ Attachment content not displayed ]
Stefan,
Thanks for your answers; I appreciate them very much.
Comments inline.
Rajith
On Nov 12, 2007 2:26 AM, Stefan Tilkov <stefan.tilkov@... > wrote:
>
>
>
>
>
>
>
> On Nov 10, 2007, at 12:27 AM, Rajith Attapattu wrote:
>
> >
> > Hi All,
> >
> > After closely following all the discussion around REST, I have a few
> > questions I would like to clear my doubts on.
> >
> > 1) One of the questions I have seen floating around is "how can I do
> > a shopping cart application in a RESTFul way".
> > Most people realize that you shouldn't use sessions because it
> > violates the stateless constraint.
> >
> > Now what some folks suggest is that when you do a PUT http://abc.com/customer/1235/basket/
> > (where the body contains a document that describes the items and
> > quantities to add), you service that request and then you return a
> > URL with the state encoded.
>
> That is an exceptionally weird design that I've never seen suggested
> yet.
>
I just read it in the following document:
http://simplewebservices.org/index.php?title=Shopping
(It may be that I have misunderstood it.)
>
>
>
>
>
>
> >
> > I have the following questions on that.
> > a) Now is this RESTful?
>
> I would claim it's not.
That is my understanding as well. I asked the question to make sure I
understood it properly.
>
>
> >
> > b) Also, there is only so much information that can be encoded,
> > since a URL has a maximum length
>
> In theory, there is no limit, in practice, there may be one. But I
> think you rarely end up with URLs that are too long if you do a
> RESTful design.
>
>
> >
> > c) Anybody intercepting this URL may be able to decode information
> > (if SSL is not used)
> >
> > 2) Most people do RPC over HTTP
>
> Hm, I think I disagree, but even if it were true - so what?
Sorry, I should have been more specific. What I meant to say was that
most REST examples given by folks look more like RPC over HTTP than
REST. It looks like I got distracted while typing that question and
didn't end up typing what I really wanted.
Sorry about that.
>
> > (Ex. some of the examples given in JSR 311 looks more RPC and not
> > RESTful)
>
> Could you be more specific? Which examples?
Stefan, look at the calculator sample at the following URL:
http://developers.sun.com/docs/web/swdp/r2/tutorial/doc/p36.html
To me it looks very RPC-like.
>
>
> >
> > What constraints in general does RPC/HTTP violate?
>
> Nearly all of them, but it of course depends on the particular RPC-
> style HTTP usage we're talking about. Often URLs are not used to
> identify resources, communication is stateful, the interface is not
> uniform, hypermedia is not used, and the meaning of the verbs is
> violated.
So as a general rule, if a URL is used to identify an action
(operation) instead of a resource, then we can say it is not RESTful?
This also seems to affect the uniform interface constraint, as now you
have different operations defined instead of the standard interface.
>
>
> >
> > Most RPC operations are stateful, so you could think they violate the
> > stateless constraint.
>
> I think you're mixing communication state and resource (backend) state ...
I meant to say they are using sessions.
However, I agree it's an overstatement to say "most" RPC is stateful.
There are lots of stateless (no communication state) RPC services out there.
>
>
> > What else?
> > Consider the following examples
> >
> > doAddition(int x, int y) - POST http://abc.com/doAddition/ (body
> > contains x & y)
> > this simply does an addition, but it is stateless. However intuition
> > tells me it's not RESTful.
> > Why is that?? What have I misunderstood here?
>
> It seems perfectly RESTful to me - there doesn't seem to be any
> reasonable resource state. The URL smells of RPC, but it's just a
> string from a REST perspective. I'd feel a little better if it were /
> addition instead of /doAddition.
Stefan, doesn't the URL identify an action instead of a resource? Or
have I misunderstood this completely?
Even if we use /addition, the URL seems to refer to an action.
Doesn't /xxx/addition/ imply that it refers to a hierarchy of resources?
To me it looks very RPC-ish.
> In fact, I'd even consider using GET
> as the operation is both safe and idempotent.
The operation is always safe and idempotent irrespective of whether we
use GET or POST.
Because no matter how many times I invoke it, all it does is compute x + y.
So as long as you use the same x and y values, the computation will
always return the same result.
> >
> > Here is another example that I don't understand.
> > increaseLuminosity(x) POST http://abc.com/increaseLuminosity (body
> > contains x)
> > {
> > I get the bulbs' state from a data base.
> > I increase it.
> > I persist the new state.
> > }
> > Now my service is stateless, however the database contains the state.
>
> Which is OK - the constraint says to avoid communication state, not
> resource state.
Stefan, for a shopping cart resource, what is communication state and
what is resource state?
What if I persist my cart after every add/delete/update of my cart to
the database instead of carrying it in my session?
This was the real question I wanted to ask (but my example was very poor).
Another point is that the URL refers to the action
"increaseLuminosity" and not a resource.
That still creates doubts in my mind.
Also from Roy's response I gather that it should return the resource's
new state.
>
> >
> > Again I don't understand exactly what constraints are violated but
> > intuition tells me it's not RESTful
> >
> > 3) Reliability with RESTful interactions.
> > Forgetting the security concerns for the time being consider the
> > following example.
> >
> > I am trying to create a service account for a customer.
> > PUT http://abc.com/customer/{id}/savings
> >
> > Now the server processes the request, but goes down before it can send
> > the response. Since PUT is idempotent, the client will retry again
> > until it gets a response.
> > However when the server comes back up, there is no reconciliation
> > process like you would get with WS-RM. So the client will always
> > retry until it is successful.
> >
> > But if you do the following, where you want to add some money to your
> > account.
> > POST http://abc.com/customer/{id}/savings/ - the body contains the
> > amount.
> >
> > Now if the server crashes after processing but before sending the
> > response, or if the client crashes before getting the response, the
> > client will retry again.
> > Now POST is not idempotent and each retry will keep on adding money.
> >
> > Is this a category of applications that REST is not suitable for? Or
> > else what is the correct form to use when building such an application
> > in a RESTful way?
>
> You either switch to PUT or make POST idempotent. See Joe Gregorio's
Hmm, this means the client keeps sending it multiple times, but it
would be nice (as Roy pointed out) if the server could tell the client
to refresh its state.
Roy provided a nice solution by including a request number.
But how can you do it in a standard way with different clients,
similar to how WS-RM works?
Aren't you kind of building your own little protocol to handle the
requests in a reliable way by adding a request (or sequence) number?
>
>
> >
> >
> > 4) Security with RESTful interactions.
> > a) The above example naturally raises questions about security.
>
> Does it?
See my comments below.
>
>
> >
> > b) SSL is only point-to-point, so if you have to go through multiple
> > intermediaries, how would you ensure privacy, non-repudiation, etc.?
>
> You can of course use XML Encryption and Digital Signature with
> RESTful HTTP, but that applies only if you're using XML. If SSL
> doesn't suit your needs, you may have a problem.
I know that you could use XML Encryption and Digital Signature etc.,
but how do I do that in a standard way like what WS-* does?
(I am not advocating WS-* or trying to start the whole REST vs. WS-*
debate here. I'd like to understand how these concerns are addressed.)
In other words, how can I do so in an interoperable way when I deal
with different clients? How can I let them know my policies?
Also, if you have intermediaries with different transports between
them, this creates a problem.
Ex. Client ----> HTTP ----> (firewall) ----> Service ------> JMS ---> backend
>
> Stefan
> --
> Stefan Tilkov, http://www.innoq.com/blog/st/
>
Rajith Attapattu wrote:
> > > 1) One of the questions I have seen floating around is "how can I
> > > do a shopping cart application in a RESTFul way".
> >
> > By defining a client-side cart that can be directly manipulated
> > on the client with products identified by the mark-up in catalog
> > sites, each with links to the cashier. That is the truly RESTful
> > way to do a shopping cart.
>
> This sounds good. However now the burden falls on the client to
> maintain the cart, and if the client crashes, it cannot restore the
> data unless it is stored somewhere on the client. For a truly thin
> client this may or may not be a viable option.

If the client is a browser, then it tends to store its URL history in
persistent storage, so even if it crashes you can go back to the last
state of your basket.

Of course, this is only valid if your application implements the basket
using GET rather than POST. I would argue that using GET is the only
sensible way to do it for browsers, even if it leads to unwieldy-looking
URLs, as users inevitably use the back button and expect to return to
the previous state. Only the last part (payment/confirmation) of the
process should use POST.

If the client is not a browser, it should be designed to recover from
crashes in much the same way.
--
Chris Burdess
On 11/12/07, Rajith Attapattu <rajith77@...> wrote:
> > > (Ex. some of the examples given in JSR 311 looks more RPC and not
> > > RESTful)
> > Could you be more specific? Which examples?
> Stefan look at the calculator sample in the following url.
> http://developers.sun.com/docs/web/swdp/r2/tutorial/doc/p36.html
I liked this bit of the example.. perhaps it says something about the JSR:
return new String(Integer.toString(sum));
(hint: Integer.toString() obviously returns a String)
But what do you think makes that example unrestful? As long as we
cheat and accept the WADL as the hypermedia document describing which
URIs to access and how to fill in the template, and we don't care
about the useless 'restbean/' part of the URI (which doesn't make it
unrestful, just ugly), it is a stateless representation of the
addition of two numbers. If you ask again with the same parameters, the
representation will even be the same (no matter if it was calculated
on the fly).
Of course an API tutorial should encourage RESTful practices instead
of showing calculator examples, but given that the JSR authors seem
more occupied with copying Strings (more examples on
http://developers.sun.com/docs/web/swdp/r2/tutorial/doc/p37.html) I
wouldn't expect too much..
--
Stian Soiland You stick to the floor not because gravity is
Manchester, UK pulling you down, but because that is the shortest
http://soiland.no/ distance between today and tomorrow. [Wikipedia]
=/\=
On Nov 13, 2007, at 12:33 AM, Rajith Attapattu wrote:
> On Nov 12, 2007 2:26 AM, Stefan Tilkov <stefan.tilkov@... >
> wrote:
> >
> >
> > On Nov 10, 2007, at 12:27 AM, Rajith Attapattu wrote:
> >
> > >
> > > Now what some folks suggest is that when you do a PUT http://abc.com/customer/1235/basket/
> > > (where the body contains a document that describes the items and
> > > quantities to add), you service that request and then you return a
> > > URL with the state encoded.
> >
> > That is an exceptionally weird design that I've never seen suggested
> > yet.
> >
>
> I just read it in the following document:
> http://simplewebservices.org/index.php?title=Shopping
> (It may be that I have misunderstood it.)
>
>
After a quick read, I think you have understood it correctly. Maybe
I'm missing something, but I've never seen this alternative before,
and I can't convince myself to like it. (In case others haven't read
this page: it suggests encoding information such as "# of items of
product 1: 3, # of items of product 2: 5" in a string and appending it to
the URI so that it itself carries the state.)
> > > 2) Most people do RPC over HTTP
> >
> > Hm, I think I disagree, but even if it were true - so what?
> Sorry, I should have been more specific. What I meant to say was that
> most REST examples given by folks look more like RPC over HTTP than
> REST. It looks like I got distracted while typing that question and
> didn't end up typing what I really wanted.
> Sorry about that.
>
>
Maybe we're looking at different examples -- I always found most of
the examples given by REST folks quite RESTful :-)
Have you seen Joe's articles on the topic?
http://www.xml.com/pub/au/225
> >
> > > (Ex. some of the examples given in JSR 311 looks more RPC and not
> > > RESTful)
> >
> > Could you be more specific? Which examples?
> Stefan, look at the calculator sample at the following URL:
> http://developers.sun.com/docs/web/swdp/r2/tutorial/doc/p36.html
>
> To me it looks very RPC-like.
>
>
That's the same one as the one we discussed below -- I believe it's
actually RESTful, although not a good example. In all fairness, I
think in this case it's not used as an example for REST, but rather as
an example of how to use the API.
> >
> >
> > >
> > > What constraints in general does RPC/HTTP violate?
> >
> > Nearly all of them, but it of course depends on the particular RPC-
> > style HTTP usage we're talking about. Often URLs are not used to
> > identify resources, communication is stateful, the interface is not
> > uniform, hypermedia is not used, and the meaning of the verbs is
> > violated.
>
> So as a general rule, if a URL is used to identify an action
> (operation) instead of a resource then we can say it is not RESTful?
>
Yes and no. It should make you suspicious, but it might be
accidentally RESTful: if you just ignore the characters in the URI,
could one argue that a resource is identified?
See http://www.markbaker.ca/blog//2005/04/14 (Mark's blog seems to be
down (?), search Google for "accidentally RESTful")
>
> This also seems to affect the uniform interface constraint, as now you
> have different operations defined instead of the standard interface.
>
>
Yes, this may be the case if the "operation" in the URL is
incompatible with the HTTP verb used (example: GET http://example.com/1234/operation=delete)
> > > Consider the following examples
> > >
> > > doAddition(int x, int y) - POST http://abc.com/doAddition/ (body
> > > contains x & y)
> > > this simply does an addition, but it is stateless. However
> intuition
> > > tells me it's not RESTful.
> > > Why is that?? What have I misunderstood here?
> >
> > It seems perfectly RESTful to me - there doesn't seem to be any
> > reasonable resource state. The URL smells of RPC, but it's just a
> > string from a REST perspective. I'd feel a little better if it
> were /
> > addition instead of /doAddition.
>
> Stefan, doesn't the URL identify an action instead of a resource? Or
> have I misunderstood this completely?
> Even if we use /addition, the URL seems to refer to an action.
>
It's just characters and words ... they don't matter from a REST
perspective. More philosophically, if I change the word to "sum", e.g.
http://example.com/sum?summand1=2&summand2=2
why is this ("the sum of 2 and 2") not a resource?
>
> Doesn't the /xxx/addition/ imply that it refers to a hierarchy of
> resources?
>
No, I could have written the above as
http://example.com/sum/2/2
without changing anything from a RESTful perspective. It may be
unusual, but it's not "wrong".
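Stefan's point that both URI spellings name the same resource can be sketched as follows. This is a toy, dispatch-only handler (the function name and URI layout are illustrative assumptions, not anything from the thread): both the query-string form and the path-segment form resolve to the same safe, idempotent GET.

```python
# Minimal sketch: both URI styles map to the same GET on a "sum" resource.
from urllib.parse import urlparse, parse_qs

def handle_get(uri: str) -> str:
    """Serve GET /sum?summand1=X&summand2=Y or GET /sum/X/Y identically."""
    parsed = urlparse(uri)
    parts = [p for p in parsed.path.split("/") if p]
    if parsed.query:                       # /sum?summand1=2&summand2=2
        q = parse_qs(parsed.query)
        x, y = int(q["summand1"][0]), int(q["summand2"][0])
    else:                                  # /sum/2/2
        x, y = int(parts[1]), int(parts[2])
    # Same resource, same representation, regardless of the URI spelling.
    return str(x + y)
```

Since the representation depends only on the URI, it is also cacheable, which is exactly the advantage of using GET here.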
>
> To me it looks very RPCish.
>
> > In fact, I'd even consider using GET
> > as the operation is both safe and idempotent.
>
> The operation is always safe and idempotent irrespective of whether we
> use GET or POST.
>
Yes, which is why GET is a better choice.
>
> Because no matter how many times I invoke it, all it does is compute
> x + y.
> So as long as you use the same x and y values, the computation will
> always return the same result.
>
Which aligns nicely with HTTP GET's caching support.
> > Which is OK - the constraint says to avoid communication state, not
> > resource state.
> Stefan, for a shopping cart resource, what is communication state and
> what is resource state?
>
That's your design decision.
>
> What if I persist my cart after every add/delete/update of my cart to
> the database instead of carrying it in my session?
>
Then it would typically become resource state. In fact, this is my
favorite approach to implementing a shopping cart - I have often
wanted to be able to send a link to my shopping cart to a colleague or
friend to take a look at it, but couldn't because it was built using
some idiotic session approach.
Considering that in "enterprisey" setups, products such as an
application server cluster persist state to the database anyway, I
don't see any wastefulness in this, either.
> > >
> > > Now if the server crashes after processing but before sending the
> > > response, or if the client crashes before getting the response,
> the
> > > client will retry again.
> > > Now POST is not idempotent and each retry will keep on adding
> money.
> > >
> > > Is this a category of applications that REST is not suitable
> for? or
> > > else what is the correct form to use when building such
> application
> > > in a RESTful way.
> >
> > You either switch to PUT or make POST idempotent. See Joe Gregorio's
> Hmm, this means the client keeps sending it multiple times, but it
> would be nice (as Roy pointed out) if the server could tell the client
> to refresh its state.
>
> Roy provided a nice solution by including a request number.
> But how can you do it in a standard way with different clients,
> similar to how WS-RM works?
>
You currently can't since there is no accepted standardized way for
reliable POST. Is there a need for one? I'm not sure -- maybe some
things should be left to the application, not everything needs to be
standardized. Then again, the same was true for dealing with
collections -- now we have a standard (with Atom and AtomPub).
>
> Aren't you kind of building your own little protocol to handle the
> requests in a reliable way by adding a request (or sequence) number?
>
Yes.
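Roy's request-number idea discussed above can be sketched like this. It is a toy in-memory server for the "add money to savings" POST from the thread; the id scheme and storage are illustrative assumptions. The client attaches a unique request id, and the server applies each id at most once, so a retry after a crash replays the stored response instead of depositing twice.

```python
# Sketch: make POST retry-safe by deduplicating on a client-chosen id.
processed: dict[str, str] = {}   # request_id -> stored response (dedup log)
balance = {"savings": 0}

def post_deposit(request_id: str, amount: int) -> str:
    """Handle POST of a deposit; replay the response on a retried id."""
    if request_id in processed:            # retry: do not apply again
        return processed[request_id]
    balance["savings"] += amount           # apply the deposit exactly once
    response = f"balance={balance['savings']}"
    processed[request_id] = response
    return response
```

This is precisely the "own little protocol" the question refers to: the dedup log has to survive server restarts (e.g. live in the database) for the guarantee to hold across crashes.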
> > > b) SSL is only point-to-point, so if you have to go through
> multiple
> > > intermediaries, how would you ensure privacy, non
> repudiation ..etc ?
> >
> > You can of course use XML Encryption and Digital Signature with
> > RESTful HTTP, but that applies only if you're using XML. If SSL
> > doesn't suit your needs, you may have a problem.
>
> I know that you could use XML Encryption and Digital Signature ..etc,
> but how do I do that in a standard way like what WS-* does?
> (I am not advocating WS-* or trying to start the whole REST vs WS-*
> debate here. I like to understand how these concerns are addressed)
> In other words how can I do so in an interoperable way when I deal
> with different clients? How can I let them know my policies ?
>
>
You can't. As to the need for standardization, see above.
Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
Is it safe to send 404 for unavailable representations of an existing
resource? Let's suppose the resource is /foo

GET /foo
<= 200 OK

GET /foo.xml
<= 404 Not Found

GET /foo
Accept: application/xml
<= 404 Not Found

Is that the right thing to do? It sounds a bit odd because the resource
is actually there, but not with the representation the client wants, so
I thought maybe there's a proper way to do so.

Cheers

--
Lawrence, oluyede.org - neropercaso.it
"It is difficult to get a man to understand something when his salary
depends on not understanding it" - Upton Sinclair
> It sounds a bit odd because the resource is actually there, but not
> with the representation the client wants, so I
> thought maybe there's a proper way to do so.
You should rather reply with "406 Not Acceptable":
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7.
Alas:
Note: HTTP/1.1 servers are allowed to return responses which are
not acceptable according to the accept headers sent in the
request. In some cases, this may even be preferable to sending a
406 response. User agents are encouraged to inspect the headers of
an incoming response to determine if it is acceptable.
Matthias
> You should rather reply with "406 Not Acceptable":
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.7.
>
> Alas:
>
> Note: HTTP/1.1 servers are allowed to return responses which are
> not acceptable according to the accept headers sent in the
> request. In some cases, this may even be preferable to sending a
> 406 response. User agents are encouraged to inspect the headers of
> an incoming response to determine if it is acceptable.

Thank you.

--
Lawrence, oluyede.org - neropercaso.it
"It is difficult to get a man to understand something when his salary
depends on not understanding it" - Upton Sinclair
On Tue, Nov 13, 2007 at 03:45:29PM +0100, Lawrence Oluyede wrote:
> Is it safe to send 404 for unavailable representations of an existing
> resource?
>
> Let's suppose the resource is /foo
>
> GET /foo
> <= 200 OK
>
> GET /foo.xml
> <= 404 Not Found

What you're requesting here is the resource "/foo.xml", not the XML
representation of the resource "/foo". I believe 404 is an appropriate
response here.

> GET /foo
> Accept: application/xml
>
> <= 404 Not Found

And in this case you should use 406, as explained by Matthias Ernst.

Sergei
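The 404-vs-406 distinction from this exchange can be sketched as follows. This is a toy dispatcher (paths, media types, and the table of representations are illustrative assumptions): a URI with no resource behind it yields 404, while an existing resource with no representation matching the Accept header yields 406.

```python
# Sketch: /foo.xml is a missing *resource* (404); an unsatisfiable
# Accept header on an existing resource is 406 Not Acceptable.
REPRESENTATIONS = {"/foo": {"text/html": "<p>foo</p>"}}

def get(path: str, accept: str = "*/*") -> int:
    """Return the status code for GET `path` with an Accept header."""
    reps = REPRESENTATIONS.get(path)
    if reps is None:
        return 404                         # no such resource at all
    if accept == "*/*" or accept in reps:
        return 200
    return 406                             # resource exists, no acceptable rep
```

As the RFC 2616 note quoted above says, a server may also choose to return a non-matching representation with 200 instead of a 406.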
[ Attachment content not displayed ]
--- In rest-discuss@yahoogroups.com, "Subbu Allamaraju"
<subbu.allamaraju@...> wrote:
>
> On 11/13/07, Chris Burdess dog@... wrote:
> >
> > Rajith Attapattu wrote:
> > > > > 1) One of the questions I have seen floating around is "how can I
> > > > > do a shopping cart application in a RESTFul way".
> > > >
> > > > By defining a client-side cart that can be directly manipulated
> > > > on the client with products identified by the mark-up in catalog
> > > > sites, each with links to the cashier. That is the truly RESTful
> > > > way to do a shopping cart.
> > >
> > > This sounds good. However now the burden falls on the client to
> > > maintain the cart and if the client crashes, it cannot restore the
> > > data unless it is stored somewhere on the client. For a truly thin
> > > client this may or may not be a viable option.
> >
> > If the client is a browser, then it tends to store its URL history in
> > persistent storage, so even if it crashes you can go back to the last
> > state of your basket.
> >
> > Of course, this is only valid if your application implements the
> > basket using GET rather than POST. I would argue that using GET is
> > the only sensible way to do it for browsers, even if it leads to
> > unwieldy-looking URLs, as users inevitably use the back button and
> > expect to return to the previous state. Only the last part
> > (payment/confirmation) of the process should use POST.
>
> Some of this logic is true if the client is not a browser and can
> persistently and reliably manage the state on the client side. If web
> browsers are primary clients, I would argue that the shopping cart be
> treated as a resource and use POST to manipulate its state (such as
> adding and deleting items). In this model the state is managed by the
> server.
>
> Subbu

I think it boils down to whether the shopping cart eventually becomes
resource state (the server stores it) or remains application state (only
the client stores it).
One could argue that a shopping cart (and the items in it) is
application state, and that the moment you click "purchase" an order is
created (which becomes resource state) and the shopping cart goes away.
However, if you want to expose a shopping cart as a resource, your
interaction will be different. It ultimately boils down to what you
really want to do.
[ Attachment content not displayed ]
On Tue, Nov 13, 2007 at 10:54:27AM +0100, Stefan Tilkov wrote:
> On Nov 13, 2007, at 12:33 AM, Rajith Attapattu wrote:
> > > > Now what some folks suggest is that when you do a PUT
> > > > http://abc.com/customer/1235/basket/
> > > > (where the body contains a document that describes the items and
> > > > quantities to add), you service that request and then you return a
> > > > URL with the state encoded.
> > >
> > > That is an exceptionally weird design that I've never seen suggested
> > > yet.
> >
> > I just read it in the following document.
> > http://simplewebservices.org/index.php?title=Shopping
> > (It may be that I have misunderstood it.)
>
> After a quick read, I think you have understood it correctly. Maybe
> I'm missing something, but I've never seen this alternative before,
> and I can't convince myself to like it. (In case others haven't read
> this page: it suggests encoding information such as "# of items of
> product 1: 3, # of items of product 2: 5" in a string and appending
> it to the URI so that it itself carries the state.)

It looks odd at first glance, but I'm starting to like it. The
particular URI encoding chosen is just a server-side implementation
detail; the URIs are opaque and the client doesn't need to know or care
about them.

The interesting thing about this implementation is that, instead of
having a resource that represents a particular user's basket, which can
change state over time, there are instead a possibly infinite number of
non-user-specific baskets on the server, each of which has a unique URI.

I'd say it's analogous to submitting an HTML form via GET. You end up at
a new URI that identifies a resource. I don't see how this is any
different from a URI like http://www.google.com/search?q=apples

I don't see any constraints being violated. Resources are named by
(opaque) URIs, the uniform interface is observed, everything is highly
cacheable, the server doesn't know or care anything about the state of
the client, and app state is firmly controlled by the client leveraging
nothing but hypermedia.

The example isn't finished; he doesn't show how you'd "check out", but I
imagine you'd do that by POSTing or PUTting a basket representation
somewhere. The hypermedia could contain forms for that too.

--
Paul Winkler
http://www.slinkp.com
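The state-in-URI basket described above can be sketched as follows. The encoding scheme here (a query string of product quantities) is an illustrative assumption; to clients the URIs are opaque. Every basket state is its own immutable resource, and "adding an item" just computes the URI of the next basket, exactly like following a form submission via GET.

```python
# Sketch: each basket state is a distinct, immutable resource; adding an
# item yields the URI of a *different* basket rather than mutating one.
from urllib.parse import urlencode, parse_qsl

def add_item(basket_uri: str, product: str, qty: int) -> str:
    """Return the URI of the basket holding `qty` more of `product`."""
    path, _, query = basket_uri.partition("?")
    items = dict(parse_qsl(query))
    items[product] = str(int(items.get(product, "0")) + qty)
    # Sorting keeps the encoding canonical: one basket state, one URI.
    return path + "?" + urlencode(sorted(items.items()))
```

Because the same basket state always canonicalizes to the same URI, GETs on these baskets are fully cacheable and the server holds no per-user session at all.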
I have an existing non-REST prototype web application which basically is
a multi-step application:

1st step:
- the user uploads an XML file
- there is some validation going on server-side
- the valid file (a collection of items) is split into one item per file
- the resulting XML files are stored somewhere on the file system

2nd step:
- the server presents to the user a page in which he can see what's
going on (if the file was invalid, or if one of the items is)
- the user can select which of the valid items can be stored by the server

3rd step:
- there's a final report of what happened in the first 2 steps

The problem is that the 2nd step obviously depends on what happens in
the 1st one, and so on. The existing application stores a bunch of
information in an in-memory session object (that I want to get rid of).
My understanding is I have only one option to be RESTful: store the
state information needed between the two steps client-side (in a
cookie?).

The same kind of transaction must be doable with a non-browser user
agent (I'm developing a Python client at the same time to test the
logic of the service). I have just read Richardson's and Ruby's book,
which I find really, really enlightening, but I don't know how to
convert this kind of logic (REST transactions perhaps?)

Cheers

--
Lawrence, oluyede.org - neropercaso.it
"It is difficult to get a man to understand something when his salary
depends on not understanding it" - Upton Sinclair
Hi all,

We have an application wherein we need to fetch a result set based on
filter criteria. The filter criteria cannot be specified as part of the
URI, as they form a fairly complex object that needs to go into the
request body. So my request-line will be

GET /data

with the filter criteria coming in via the entity body.

However, from various sources, I gathered that it is not recommended to
send data as part of the entity body when using GET, DELETE and so on.
However, I could not find this mentioned explicitly in the HTTP
standard.

If that is correct, then would it be advisable to model the filter as a
resource to which I POST my criteria, and which then gives back the
result set? The request-line then would be something like

POST /filter

with the filter criteria coming in via the entity body.

Thanks in advance
Suyog
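The question above already suggests modeling the filter as a resource; here is a minimal in-memory sketch of that idea. All of it is illustrative assumption: the /filter URI, the hash-based id, and the data. POSTing the criteria creates (or reuses) a filter resource and hands back its URI, which can then be GET repeatedly and cached.

```python
# Sketch: POST complex criteria once to mint a filter resource, then GET
# the result set via its URI (safe, idempotent, cacheable).
import hashlib
import json

DATA = [{"city": "NYC", "age": 30}, {"city": "NYC", "age": 40},
        {"city": "LA", "age": 25}]
filters: dict[str, dict] = {}

def post_filter(criteria: dict) -> str:
    """Create or reuse a filter resource; return its URI."""
    # Hashing canonical JSON makes the POST effectively idempotent:
    # identical criteria always map to the same filter URI.
    fid = hashlib.sha1(
        json.dumps(criteria, sort_keys=True).encode()).hexdigest()[:8]
    filters[fid] = criteria
    return f"/filter/{fid}"            # would be returned as Location: on 201

def get_results(filter_uri: str) -> list:
    """GET the result set identified by a filter resource's URI."""
    criteria = filters[filter_uri.rsplit("/", 1)[1]]
    return [row for row in DATA
            if all(row.get(k) == v for k, v in criteria.items())]
```

The two-step shape costs one extra round trip but keeps GET free of an entity body, which intermediaries and caches handle much more predictably.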
Stefan Tilkov wrote: > On Nov 8, 2007, at 1:39 PM, A. Pagaltzis wrote: > > >> * Stefan Tilkov <stefan.tilkov@...> [2007-11-08 17:50]: >> >>> I prefer to model something like this as two collection >>> resources -- e.g. /cancelled-orders and /picked-orders -- >>> with the state change as a "move" from one to the other. >>> >> That fits nicely on the face of it� but it seems tricky to >> implement in HTTP to me. How is the move operation initiated? >> Do you adopt MOVE from WebDAV? Or do you use another verb � if >> so, to what URI and with what representation? And how is this >> communicated in hypermedia? >> >> > > This was not exactly unexpected :-) I think it depends on which trade- > off is more acceptable to me: building something that only works if > both /cancelled-orders and /picked-orders are held by the same server, > or incurring a bit of overhead. > > In the first scenario, I could POST a representation containing the > URI to the order to be cancelled (let's say /submitted/123) to / > cancelled. The server would internally move the resource and return > the location of the "new" (cancelled) order. > > In the second scenario, I'd also do a POST with the URI to the > original to /cancelled. The server would retrieve the order > representation with GET, store it internally, DELETE the old one, and > return the location of the new one. As the original POST contains the > URI of the order to be moved it can be made idempotent (assuming the > server knows both the old and new URI). > > Okay, apparently I need some more educatin'. While the value of being able to dereference canceled orders and shipped orders etc. is pretty obvious, my first instinct would be to make them read-only resources. To me the fact that an order is shipped, canceled, and so on is a property of the order resource and not some derived resource. And to change that state you would PUT a new representation to the existing order. That is, /orders represents _all_ orders. 
To create a new order, I POST to /orders. The server would tell me that my new order is available at /orders/123. When I retrieve that order it might look something like this (leaving out all angle-brackety looking things) item = ABC-123 quantity = 10 status = new If I wanted to cancel the order, I would change the state to "canceled" and PUT it back. I can even call OPTIONS on the resource first to see if it allows me to alter the order. For new orders, GET and PUT can be returned. For canceled or shipped orders, just GET is returned. And at any time I can GET /orders/new, /orders/canceled, /orders/shipped, etc. Trying to alter any of those resources returns 405 Method Not Allowed. Now, I can see making things like /canceled etc. writable, if your requirement was to treat an order cancellation as a thing unto itself (Wall St. probably works that way), and you wanted to allow users to track a cancellation and allow the business to report easily on cancellations. But that doesn't seem to work here, where an order can "move itself" into other states, like shipped. So, where did I go off the rails? - Pete
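Pete's PUT-back scheme can be sketched as a toy in-memory handler. The resource layout, field names, and handler shapes below are hypothetical, just to show the OPTIONS/PUT/405 behavior he describes:

```python
# Toy sketch of the PUT-back design: each order carries a status field,
# and state changes happen by PUT-ing a new representation to /orders/{id}.

orders = {"123": {"item": "ABC-123", "quantity": 10, "status": "new"}}

def allowed_methods(order_id):
    """What OPTIONS would advertise: the order is mutable only while 'new'."""
    if orders[order_id]["status"] == "new":
        return {"GET", "PUT", "OPTIONS"}
    return {"GET", "OPTIONS"}

def put_order(order_id, representation):
    """PUT handler: 405 once the order has left the 'new' state."""
    if "PUT" not in allowed_methods(order_id):
        return 405, orders[order_id]
    orders[order_id] = representation
    return 200, representation

# Cancel by changing the status and PUT-ing the representation back.
status, rep = put_order("123", {"item": "ABC-123", "quantity": 10,
                               "status": "canceled"})

# A second attempt to modify the now-canceled order fails with 405.
status_again, _ = put_order("123", {"item": "ABC-123", "quantity": 10,
                                    "status": "new"})
```

The read-only collections (/orders/canceled etc.) would then simply be views over this same store.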
I'll have a try...
Lawrence Oluyede wrote:
> 1st step:
>
> - the user uploads an XML file
> - there is some validation going on server-side
> - the valid file (a collection of items) is splitted in one-item-per-file
> - the resulting XML files are stored somewhere on the file system
I'll assume that the steps after the upload are asynchronous. I would
model this as:
- User POSTs the XML file to /upload_job
- Server responds "201 Created" (or perhaps "202 Accepted") with a
Location: header set to /upload_job/{job_id}. For browser users, it
might also return a representation that says "Click here to see the
status of your job" along with a link to the same job status URI.
> 2nd step:
>
> - the server presents to the user a page in which he can see what's
> going on (if the file was invalid, or if one of the items is)
> - the user can select which of the valid items can be stored by the server
User GETs the URI provided in the location above. If the asynchronous
process has completed, the server responds with a representation that
includes a list of the items found and their status (valid/invalid)
along with a FORM that lets the user choose which of the valid items to
store. If the job isn't complete the representation tells the user that.
>
> 3rd step:
>
> - there's a final report of what happened in the first 2 steps
After the user POSTs the form, the server responds with the final
report. In the future, GETting the status URI will return the same report.
> The problem is the 2nd step obviously depends on what happens in the
> 1st one and so on. The existing application
> stores a bunch of information in a in-memory session object (that I
> want to get rid of).
>
> My understanding is I have only one option to be RESTful: store the
> state information needed between the two steps client side (in a
> cookie?).
Here I've represented the state of the job as a new resource, which the
user can GET or POST to interact with the job.
Does that address your scenario, or did I miss some part of it?
Jim
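Jim's job-as-resource pattern can be sketched end to end. The URIs, job fields, and 202 status choice are assumptions for illustration, not anything from the original application:

```python
import uuid

# Toy in-memory sketch of the asynchronous upload-job resource.
jobs = {}

def post_upload_job(xml_bytes):
    """POST /upload_job -> 202 Accepted plus the Location of the new job."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"state": "processing", "items": []}
    return 202, {"Location": "/upload_job/" + job_id}

def get_job(job_id):
    """GET /upload_job/{id}: status while running, the report once done."""
    return 200, jobs[job_id]

def finish_validation(job_id, items):
    """What the async worker records once validation completes."""
    jobs[job_id] = {"state": "done", "items": items}

# Client POSTs the file, then polls the URI from the Location header.
status, headers = post_upload_job(b"<items>...</items>")
job_id = headers["Location"].rsplit("/", 1)[1]
finish_validation(job_id, [{"name": "item-1", "valid": True}])
```

The session object disappears because all the inter-step state lives in the job resource, addressable by its own URI.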
Suyog wrote: > Hi all, > > If that is correct, then would it be > advisable to model the Filter as a resource to which I POST my criteria > and it then gives back the result set? Yes.
Peter Lacey <placey@...> writes: > If I wanted to cancel the order, I would change the state to "canceled" > and PUT it back. I can even call OPTIONS on the resource first to see > if it allows me to alter the order. For new orders, GET and PUT can be > returned. For canceled or shipped orders, just GET is returned. And at > any time I can GET /orders/new, /orders/canceled, /orders/shipped, etc. > Trying to alter any of those resources returns 405 Method Not Allowed. > > Now, I can see making things like /canceled etc. writable, if your > requirement was to treat an order cancellation as a thing unto itself > (Wall St. probably works that way), and you wanted to allow users to > track a cancellation and allow the business to report easily on > cancellations. But that doesn't seem to work here, where an order can > "move itself" into other states, like shipped. > > So, where did I go off the rails? There is nothing wrong with a collection changing in-between GETs. CANCELLED and SHIPPED bins are shared resources and unless a client can have exclusive control over them, it simply has to expect that they may change without its knowledge. YS.
On 11/14/07, Suyog <suyog.gaidhani@...> wrote: > Hi all, > > We have an application wherein we need to fetch a result set based on a > filter criteria. The filter criteria cannot be specified as part of the > URI as it is a fairly complex object that needs to get into the request > body. So my request-line will be GET /data with the filter criteria > coming in via the entity body. However, from various sources, I gathered > that it is not recommended to send data as part of the entity body when > using GET, DELETE and so on. However, I could not find this mentioned > explicitly in the HTTP standard. The spec permits it, but you could conceivably run into problems with deployed software and SDKs which don't. AFAIK, the possibility for its use in the wild is completely untested. > If that is correct, then would it be > advisable to model the Filter as a resource to which I POST my criteria > and it then gives back the result set? The request-line then would be > something like POST /filter with the filter criteria coming in via the > entity body. Yup. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Yohanes Santoso wrote: > Peter Lacey <placey@...> writes: > > >> If I wanted to cancel the order, I would change the state to "canceled" >> and PUT it back. I can even call OPTIONS on the resource first to see >> if it allows me to alter the order. For new orders, GET and PUT can be >> returned. For canceled or shipped orders, just GET is returned. And at >> any time I can GET /orders/new, /orders/canceled, /orders/shipped, etc. >> Trying to alter any of those resources returns 405 Method Not Allowed. >> >> Now, I can see making things like /canceled etc. writable, if your >> requirement was to treat an order cancellation as a thing unto itself >> (Wall St. probably works that way), and you wanted to allow users to >> track a cancellation and allow the business to report easily on >> cancellations. But that doesn't seem to work here, where an order can >> "move itself" into other states, like shipped. >> >> So, where did I go off the rails? >> > > There is nothing wrong with a collection changing in-between > GETs. CANCELLED and SHIPPED bins are shared resources and unless a client > can have exclusive control over them, it simply has to expect that > they may change without its knowledge. > > > YS. > > That I understand. I guess I wasn't clear. To restate. We have a resource called an order which can be in several states: new, shipped, canceled (I'll ignore the rest). That gives us at least three resource collections /new-orders, /canceled-orders, and /shipped-orders (or /orders, /orders/canceled, /orders/shipped or ...). Stefan recommends that if a user wants to cancel an order they should post the URI of the new (open) order to the /canceled-orders resource, and the server "moves" the order from the new bucket to the canceled bucket. Whereas my instinct is to model the order with a status field, GET the order, change the status, and PUT it back. Leaving /canceled-orders and /shipped-orders to be read-only, convenience resources. 
I can see two reasons for doing it Stefan's way. 1. If canceling and shipping an order are unique business processes. For instance, if it is required that a "cancel order" request be submitted (and thus be trackable), and the request itself is a resource, which can in turn be approved/disapproved, retrieved, etc. 2. To support, as Stefan noted, the ability to locate the various resources/services on separate servers without any backend dependencies (e.g., a database). Both of these are valid, but neither is automatic. So my question again, is there anything wrong with my seemingly straightforward means of altering the status of an order or is it just another, legitimate, way of doing things? - Pete
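For contrast, Stefan's "move" style (POST the order's URI to the canceled-orders collection, server performs the move) can be sketched too. The bucket names and return codes are hypothetical choices for the sketch:

```python
# Toy sketch of the collection-move design: cancel an order by POSTing
# its URI to /canceled-orders; the server moves it between buckets.

buckets = {
    "new-orders": {"/orders/123": {"item": "ABC-123", "quantity": 10}},
    "canceled-orders": {},
}

def post_canceled_order(order_uri):
    """POST {order_uri} to /canceled-orders, written to be idempotent."""
    if order_uri in buckets["canceled-orders"]:
        return 200, order_uri          # already moved: safe to repeat
    order = buckets["new-orders"].pop(order_uri)
    buckets["canceled-orders"][order_uri] = order
    return 201, order_uri

status, loc = post_canceled_order("/orders/123")
status_again, _ = post_canceled_order("/orders/123")
```

This is the idempotency point Stefan makes: because the POST names the order by URI, repeating it is harmless.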
> [SUPER CUT] > > Does that address your scenario, or did I miss some part of it? Thanks a lot Jim, you cleared my mind a lot. -- Lawrence, oluyede.org - neropercaso.it "It is difficult to get a man to understand something when his salary depends on not understanding it" - Upton Sinclair
On Nov 14, 2007, at 8:59 PM, Peter Lacey wrote: > So my question > again, is there anything wrong with my seemingly straightforward means > of altering the status of an order or is it just another, legitimate, > way of doing things? AFAICT, we've entered the realm of design decisions instead of discussions about what's more "RESTful". Which is a good thing :-) I don't think there is anything wrong with your approach. Mine stems from a design where I needed a simple way to allow retrieving orders based on their status, so modeling things this way solved two problems at once. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Peter Lacey <placey@...> writes:
> We have a resource called an order which can be in several states: new,
> shipped, canceled (I'll ignore the rest). That gives us at least three
> resource collections /new-orders, /canceled-orders, and /shipped-orders
> (or /orders, /orders/canceled, /orders/shipped or ...). Stefan
> recommends that if a user wants to cancel an order they should post the
> URI of the new (open) order to the /canceled-orders resource, and the
> server "moves" the order from the new bucket to the canceled bucket.
> Whereas my instinct is to model the order with a status field, GET the
> order, change the status, and PUT it back. Leaving /canceled-orders and
> /shipped-orders to be read-only, convenience resources.
>
> I can see two reasons for doing it Stefan's way. 1. If canceling and
> shipping an order are unique business processes. For
> instance, if it is required that a "cancel order" request be submitted
> (and thus be trackable), and the request itself is a resource, which can
> in turn be approved/disapproved, retrieved, etc. 2. To support, as
> Stefan noted, the ability to locate the various resources/services on
> separate servers without any backend dependencies (e.g., a database).
> Both of these are valid, but neither are automatic. So my question
> again, is there anything wrong with my seemingly straightforward means
> of altering the status of an order or is it just another, legitimate,
> way of doing things?
I see nothing wrong; it's just another way. Perhaps others would chime
in?
For me, since you've deemed that the status is important enough to set
up the /{...}-orders hierarchies for reporting; since you already have
the bulk of the underlying mechanism to handle POST-ing to the
collections; then I'd just do it for completeness' sake.
YS
--- In rest-discuss@yahoogroups.com, Peter Lacey <placey@...> wrote: > > Yohanes Santoso wrote: > > Peter Lacey placey@... writes: > > > > > >> If I wanted to cancel the order, I would change the state to "canceled" > >> and PUT it back. I can even call OPTIONS on the resource first to see > >> if it allows me to alter the order. For new orders, GET and PUT can be > >> returned. For canceled or shipped orders, just GET is returned. And at > >> any time I can GET /orders/new, /orders/canceled, /orders/shipped, etc. > >> Trying to alter any of those resources returns 405 Method Not Allowed. > >> > >> Now, I can see making things like /canceled etc. writable, if your > >> requirement was to treat an order cancellation as a thing unto itself > >> (Wall St. probably works that way), and you wanted to allow users to > >> track a cancellation and allow the business to report easily on > >> cancellations. But that doesn't seem to work here, where an order can > >> "move itself" into other states, like shipped. > >> > >> So, where did I go off the rails? > >> > > > > There is nothing wrong with a collection changing in-between > > GETs. CANCELLED and SHIPPED bins are shared resources and unless a client > > can have exclusive control over them, it simply has to expect that > > they may change without its knowledge. > > > > > > YS. > > > > > That I understand. I guess I wasn't clear. To restate. > > We have a resource called an order which can be in several states: new, > shipped, canceled (I'll ignore the rest). That gives us at least three > resource collections /new-orders, /canceled-orders, and /shipped-orders > (or /orders, /orders/canceled, /orders/shipped or ...). Stefan > recommends that if a user wants to cancel an order they should post the > URI of the new (open) order to the /canceled-orders resource, and the > server "moves" the order from the new bucket to the canceled bucket. 
> Whereas my instinct is to model the order with a status field, GET the > order, change the status, and PUT it back. Leaving /canceled-orders and > /shipped-orders to be read-only, convenience resources. > > I can see two reasons for doing it Stefan's way. 1. If canceling and > shipping an order are unique business processes. For > instance, if it is required that a "cancel order" request be submitted > (and thus be trackable), and the request itself is a resource, which can > in turn be approved/disapproved, retrieved, etc. 2. To support, as > Stefan noted, the ability to locate the various resources/services on > separate servers without any backend dependencies (e.g., a database). > Both of these are valid, but neither is automatic. So my question > again, is there anything wrong with my seemingly straightforward means > of altering the status of an order or is it just another, legitimate, > way of doing things? > > - Pete Does a resource have to match the implementation under the covers? Couldn't I POST to /shipped-orders and it be represented internally as the "status" field being populated on the order? I would hope so. Having said that, as the original poster, I contemplated just PUT'ing the changed resource back to /orders/[orderID]; however, I didn't feel comfortable with the fact that I would have to interrogate the "status" field to actually know what "operation" I needed to perform (assume that shipping an order involves more than just changing the order's status, e.g. it additionally sends an e-mail, while canceling does not). Maybe I am missing something.
Hey, > from a design where I needed a simple way to allow retrieving orders Maybe I am getting this wrong, but wasn't that Pete's point too? Pete said that in his design the cancelled orders could be retrieved as a read-only resource from /cancelled_orders/, which is exactly what your design does for retrieving the order. The only difference between your and Pete's designs is how the change of state of an order is accomplished. BTW, I am totally with Pete's design; I think it is just more correct and simpler. >Having said that , as the original poster, I contemplated just PUT'ing >the changed resource back to /orders/[orderID] however I didn't feel >comfortable with the fact that I would have to interrogate the "status" >field to actually know what "operation" I needed to perform (assume that >shipping an order involves more than just changing the orders status >e.g. sends an e-mail additionally while canceling does not ). Maybe I >am missing something. How should this be a concern? Even now in Pete's design you have a list of cancelled orders and shipped orders. What he has done is that to change the state of the resource, you do a PUT on the actual order and not on the list. The list is just a read-only resource denoting which order is in which state. I guess Pete should clarify this more, or I am totally wrong in my understanding of his design. Thanks dev
> How should this be a concern? Even now in Pete's design you have a > list of cancelled orders and shipped orders. What he has done is that > to change the state of the resource, you do a PUT on the actual orders > and not on the list. The list is just a read only resource denoting > which order is in which state. Well, maybe it shouldn't be; however, my implementation would have to do: if the state changed to ship, then send an e-mail; else if it changed to cancel, then do nothing. To me this is no different (from a server-side implementation angle) than passing "cancel" or "ship" in the body of the HTTP method. Granted I will communicate the change differently, but I would still have to do the above logic. If I PUT/POST to a resource directly, then I know I want to "ship" and I don't have to "test" the state of the resource. Make sense?
Amaeze wrote: > If I PUT/POST to a resource directly, then I know I want to "ship" and I > don't have to "test" the state of the resource. > > Make sense? It does to me. Now that I've seen the reasoning behind this design from you, Yohanes, Stefan, and (off list) Mike Amundsen, I rather like it. To sum up, allowing writes to derived resources e.g. /shipped-orders, /canceled-orders, etc. allows: 1. Location independence of services 2. Modeling state changes as distinct resources (auditing) 3. Simpler code, as messages don't have to be inspected to derive intent. 4. Easier support for the hypertext constraint 5. Potentially better mapping to business processes Such an approach is not always warranted, but may be more useful than I at first imagined. -- Pete
On 11/9/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote:
> I want to use XMPP as the routing protocol; to send data to things
> (processes) deployed on machines without fixed IP addresses (say on
> Ec2 or similar). I also want the option of using the same front end to
> talk to classic HTTP(S) accessible systems.
Firstly, did you look at http://code.google.com/p/xeerkat/ ? (I don't
know the project, but it's the second google hit for p2p rest)
I've been thinking about a similar setup for my (still quite
theoretical) distributed file system. However, I wouldn't want to use
XMPP as it still requires a server setup, and would like to use a true
p2p system such as JXTA or something simpler built on distributed hash
tables.
I agree that having a jabber account per resource would be very modern
and decoupled and all that. (or at least unique "to" addresses - maybe
there is a way to hack multiple addresses from a single account in
Jabber with some prefix or postfix? Something similar to how qmail can
be configured to receive emails to
youraccount-whateveryouwant@...)
If you use p2p groups you can just use some UUID (or even a virtual
HTTP URI) as the resource identifiers and submit it to the p2p message
group in an HTTP-like message:
PUT http://myapp.com/p2p/nodes/a87497f6-65b0-4c0d-a988-4f5f1ccdea88/ip4
Content-Type: text/plain
156.282.22.14:2331
could for instance be a way a node registered itself. (Coupled with
some appropriate signatures to prove that the PUT came from
a87497f6-65b0-4c0d-a988-4f5f1ccdea88). Note that there wouldn't
necessarily be anything on http://myapp.com/p2p/ - the URIs only serve
as identificators within the application. Requests would be sent out
on the p2p group similarly:
GET http://myapp.com/p2p/nodes/a87497f6-65b0-4c0d-a988-4f5f1ccdea88/last_seen
The nodes that (according to, say, the distributed hash table) know about
a87497f6-65b0-4c0d-a988-4f5f1ccdea88 would then reply with an
appropriate representation of the resource, even if the node
a87497f6-65b0-4c0d-a988-4f5f1ccdea88 itself was by then offline.
Of course you could have resources that are living freely outside "nodes":
PUT http://myapp.com/p2p/notes/7f5fd142-af5f-4c2a-9395-7df4f51fdc04
Content-Type: text/plain
This is a note.
Depending on which p2p framework is used, it might or might not be
necessary to add some headers to identify particular GET and POST
requests, who's requesting, and who's replying. In addition you would
have issues with authority, for instance it could be difficult for
anyone to say 404, because all except the responsible nodes wouldn't
know about the resource, and so you can't know if it's a real 404 or
just a connection timeout (which doesn't have an HTTP status code,
AFAIK).
I'm not so sure how clever it is to use "http://" URIs in such an
application, as it would not distinguish between real HTTP URLs and
p2p-based ones, so a different protocol name is probably preferable,
for instance urn:myapp.com:p2p/nodes/a87497f6-65b0-4c0d-a988-4f5f1ccdea88
--
Stian Soiland You stick to the floor not because gravity is
Manchester, UK pulling you down, but because that is the shortest
http://soiland.no/ distance between today and tomorrow. [Wikipedia]
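Stian's urn-scheme suggestion can be shown as a tiny serializer for the HTTP-like p2p messages above. The scheme layout and function names are illustrative only, not from any spec or framework:

```python
# Sketch of serializing HTTP-like p2p messages using a urn: scheme
# instead of http://, so they cannot be mistaken for real web URLs.

def p2p_message(method, resource_id, body=None, content_type="text/plain"):
    """Build a request line plus optional entity, in the style shown above."""
    uri = "urn:myapp.com:p2p/" + resource_id
    lines = [method + " " + uri]
    if body is not None:
        lines += ["Content-Type: " + content_type, "", body]
    return "\n".join(lines)

# The node-registration example from the thread, re-expressed with urns.
msg = p2p_message("PUT",
                  "nodes/a87497f6-65b0-4c0d-a988-4f5f1ccdea88/ip4",
                  "156.282.22.14:2331")
```

Signing and sender identification, as discussed, would go in additional headers built the same way.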
=/\=
Amaeze wrote:
> If I PUT/POST to a resource directly, then I know I want to "ship"
> and I don't have to "test" the state of the resource.
> Make sense?
What you say does make sense, and as Pete said, this has a large
number of implementation advantages too.
But I just can't shake off the nagging feeling that there is something
wrong here ...
In the AtomPub RFC there is an app:draft element: including it in the POST
that creates a resource means that the entry is not publicly
viewable, whereas its absence in any PUT/POST implies the entry is
publicly visible.
The AtomPub RFC says that an edit request should be sent to the
member URI. I mean, there is no difference here (in the URI)
between the drafts and the entries which are publicly visible. (This
is similar, I hope, to the cancelled_orders and shipped_orders: two
different states of the resource.) I don't do a PUT on
example.org/drafts/post1 to make it publicly visible. Similarly, I
shouldn't use a PUT on the list at /cancelled_orders or /shipped_orders.
imho.
I am not that experienced. If this is just a point where
implementation convenience overrides other things, then please tell
me so. I am just concerned that I may be making a mistake in my
understanding.
Regards
dev
* Amaeze <amaeze@...> [2007-11-15 01:00]: > Does resource have to match implementation under the covers? Not at all. > Couldn't I POST to /shipped-orders and it be represented > internally as the "status" field being populated on the order? > I would hope so. The way I understood it, Pete’s question is about how the resource structure would look to the outside world. Both approaches are valid and each have their own drawbacks and benefits. However the data is actually stored and modified internally is of no consequence to the question. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
--- In rest-discuss@yahoogroups.com, "bertie_wooster_funny" <f2005125@...> wrote: > > > > Amaezee Wrote: > > If I PUT/POST to a resource directly, then I know I want to "ship" > > > and I don't have to "test" the state of the resource. > > Make sense? > > > > What you say does make sense , and as Pete said that this has a large > number of implementation advantages too. > > But I just can't shake of the nagging feeling that there is something > wrong here ... > > In the Atom Pub RFC, there is a app:draft.. including this in the POST > that creates a resource means that the selected entry is not publicly > viewable, whereas absence of this in any PUT/POST implies the entry is > publicly visible. > > The AtomPub rfc says that the EDIT request should be sent to the > member uri. I mean, there is no difference over here (in the URI) > between the drafts and the entries which are publicly visible. (this > is similar I hope to the cancelled_orders and shipped_orders , 2 > different states of the resource). I don't do A PUT on > example.org/drafts/post1 to make it publicly visible . Similarly , I > shouldn't use a PUT on the list at /cancelled_orders or /shipped_orders. > > imho. > > I am not that experienced. If this is just a point where > implementation convenience over rides other things, then please tell > me so. I am just concerned that I may be making a mistake in my > understanding. > > Regards > dev > I'm not that familiar with the AtomPub RFC (unfortunately) so I can't speak directly to that. However, it would seem that app:draft controls visibility of a resource - a tad different from what we are dealing with here. Unfortunately, I think you'll have to point out what is not RESTful about the proposed solution. FTR, I might actually POST to /canceled_orders since I am adding to a collection (POST(a)) - still trying to decide on that one.
On Nov 15, 2007 10:43 PM, Stian Soiland <stian@...> wrote:
> On 11/9/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote:
> > I want to use XMPP as the routing protocol; to send data to things
> > (processes) deployed on machines without fixed IP addresses (say on
> > Ec2 or similar). I also want the option of using the same front end to
> > talk to classic HTTP(S) accessible systems.
>
> Firstly, did you look at http://code.google.com/p/xeerkat/ ? (I don't
> know the project, but it's the second google hit for p2p rest)
I didn't do that research; a bit of delegation by asking the list :)
>
>
> I've been thinking about a similar setup for my (still quite
> theoretical) distributed file system. However, I wouldn't want to use
> XMPP as it still requires a server setup, and would like to use a true
> p2p system such as JXTA or something simpled built on distributed hash
> tables.
We have Anubis, a fault-tolerant, partition-aware tuple space for a
single physical site (it uses multicast IP). I wouldn't try and
retrofit REST to a t-space, as it is its own unique way of sharing
information without nodes knowing each other. But even there, I've
thought of adding a URI just so that I can have it serve up artifacts
for RMI to handle in its classloader list -- I'd publish code in the
space and have RMI download JARs on demand from anything that held
them. But really, I'd be better off not using RMI :)
>
> I agree that having a jabber account per resource would be very modern
> and decoupled and all that. (or at least unique "to" addresses - maybe
> there is a way to hack multiple addresses from a single account in
> Jabber with some prefix or postfix? Something similar to how qmail can
> be configured to receive emails to
> youraccount-whateveryouwant@...)
As was pointed out to me in private email, every address in XMPP has
an ID and a location, which is how xeerkat works. They also include
the return address in the request:
xeerkat://{sender-id}/{sender-resource}/{receiver-id}/{receiver-resource}/{path}
I would prefer an HTTP header, though obviously not a single-line
equivalent of WS-Addressing.
>
> If you use p2p groups you can just use some UUID (or even a virtual
> HTTP URI) as the resource identifiers and submit it to the p2p message
> group in a HTTP-like message:
>
> PUT http://myapp.com/p2p/nodes/a87497f6-65b0-4c0d-a988-4f5f1ccdea88/ip4
> Content-Type: text/plain
>
> 156.282.22.14:2331
>
> could for instance be a way a node registered itself. (Coupled with
> some appropriate signatures to prove that the PUT came from
> a87497f6-65b0-4c0d-a988-4f5f1ccdea88). Note that there wouldn't
> necessarily be anything on http://myapp.com/p2p/ - the URIs only serve
> as identificators within the application. Requests would be sent out
> on the p2p group similarly:
>
> GET http://myapp.com/p2p/nodes/a87497f6-65b0-4c0d-a988-4f5f1ccdea88/last_seen
>
> The nodes who according to say the distributed hash table knows about
> a87497f6-65b0-4c0d-a988-4f5f1ccdea88 would then reply with an
> appropriate representation of the resource, even if the node
> a87497f6-65b0-4c0d-a988-4f5f1ccdea88 himself was now even offline.
>
> Of course you could have resources that are living freely outside "nodes":
>
> PUT http://myapp.com/p2p/notes/7f5fd142-af5f-4c2a-9395-7df4f51fdc04
> Content-Type: text/plain
>
> This is a note.
>
>
> Depending on which p2p framework is used, it might or might not be
> necessary to add some headers to identify that particular GET and POST
> requests, who's requesting, and who's replying. In addition you would
> have issues with authority, for instance it could be difficult for
> anyone to say 404, because all except the responsible nodes wouldn't
> know about the resource, and so you can't know if it's a real 404 or
> just connection timed out. (which doesn't have a HTTP status code
> ASFAK).
>
>
> I'm not so sure how clever it is to use "http://" URIs in such an
> application, as it would not distinguish between real HTTP URLs and
> p2p-based ones, so a different protocol name is probably preferable,
> for instance urn:myapp.com:p2p/nodes/a87497f6-65b0-4c0d-a988-4f5f1ccdea88
If you used a different URI, you could patch a new URI handler into
the Java root classloader, so anything that resolved URLs would be
able to open your nodes. I think you can also do the same to IE,
though it is probably hard to do and even harder to do securely.
Interesting thought though, and similar to what I was thinking of for
bonding to our tuplespace.
-steve
* Paul Winkler <pw_lists@...> [2007-11-14 04:15]: > On Tue, Nov 13, 2007 at 10:54:27AM +0100, Stefan Tilkov wrote: > > On Nov 13, 2007, at 12:33 AM, Rajith Attapattu wrote: > > > http://simplewebservices.org/index.php?title=Shopping > > > > Maybe I'm missing something, but I've never seen this > > alternative before, and I can't convince myself to like it. > > I don't see any constraints being violated. Resources are named > by (opaque) URIs, the uniform interface is observed, everything > is highly cacheable, the server doesn't know or care anything > about the state of the client, and app state is firmly > controlled by the client leveraging nothing but hypermedia. I agree that from a REST perspective, that approach is fine. In practice I advise against it, though: URIs may be opaque for the purposes of REST, but humans interpret them all the time (and indeed this is what “URI design” is about), and encoding the cart contents into the URI means that anyone who can glean the URI (eg any intermediaries, attackers performing XSS, etc pp), can infer the contents of a user’s shopping cart, whether or not the user wanted to disclose that information. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
I know this might not sit well with the practical, industry-working people on this list, but I thought I might as well ask... Has anyone done any formal models of REST? Has anyone, using any of the numerous process calculi or introducing their own, shown what mathematical constraints are required for a RESTful design? Regards, dev
Hiya .. I recently gave a couple of talks on REST vs. WS-*: Facts, Myths and Lies. See: http://sanjiva.weerawarana.org/2007/11/rest-vs-ws-facts-myths-and-lies.html I'm sure a few people here will disagree with me on some of my views on REST ;-). I'd be very happy to be educated and also to debate some of the details - comments on my blog are preferred but I'll be happy to follow up on this list too. (I've been a lurker for a while!) Bye, Sanjiva. -- Sanjiva Weerawarana, Ph.D. Founder & Director; Lanka Software Foundation; http://www.opensource.lk/ Founder, Chairman & CEO; WSO2, Inc.; http://www.wso2.com/ Member; Apache Software Foundation; http://www.apache.org/ Visiting Lecturer; University of Moratuwa; http://www.cse.mrt.ac.lk/
FWIW, my response can be found here; http://tech.groups.yahoo.com/group/service-orientated-architecture/message/9353 Mark.
Ouch. Not a bad idea, but ick, this guy has confused a few things, to wit; "A simple fact about HTTP is both its greatest strength and its central weakness: HTTP is a stateless protocol. Each request to an HTTP server resource is meant to be idempotent, which is to say the same request should return the same result at each invocation. Idempotency is the central idea in REST: the same request perhaps encoding client information should return the same data whenever it is made." Oopsie! Mark.
On Nov 22, 2007 6:31 PM, Sanjiva Weerawarana <sanjiva@...> wrote: > Hiya .. I recently gave a couple of talks on REST vs. WS-*: Facts, Myths > and Lies. See: > > http://sanjiva.weerawarana.org/2007/11/rest-vs-ws-facts-myths-and-lies.html > > I'm sure a few people here will disagree with me on some of my views on > REST ;-). Why would we do that? I mean, I know there are some differences between us on the value of WSDL, WS-A, and all the WS-* callback stuff, but I think even you are starting to suspect that WS-* doesn't work. > I'd be very happy to be educated and also to debate some of the > details - comments on my blog are preferred but I'll be happy to follow up > on this list too. One of the things I don't think has worked is this hard split between 'middleware implementor' and 'end developer'. It's something that all the contract-last stacks have tried to do: hide you from the headers, from WS-A, etc. And the end users, for a fee, can do that. But then something goes wrong and they are left trying to sniff SOAP requests off the wire with Ethereal and make sense of WS-A there and then, or they are left trying to debug a bit of WS-* interop over the phone with an external caller. In SOAP, you can run from XML, but you can't hide. And trying to convince the paying customer that they don't need to understand XSD is wishful thinking on everyone's part, because you will end up staring at the schemas, or even stepping into the SOAP stack to identify why fault information is being lost. Admittedly, part of the problem is not SOAP; it is the stacks that try to map from XML to Object and back again. And I've encountered REST code this week which does exactly that: it's the typical API used to talk to Amazon EC2. And just like SOAP stacks, it's brittle against change; the move to Java 6 is enough to break it, as the XSD import code is finding ambiguity with ##local being in ##any, whereas in Java 5 it seemingly wasn't.
But that doesn't say SOAP is good, only that anyone who tries to hide from the XML, or anyone who thinks XSD is a good language for representing XML document syntax, is in trouble. The nice thing about REST is you don't have to do that. Regarding REST authentication, yes, it varies. Basic auth over HTTPS works nicely; other things, well, they are non-standard. As I am working with Amazon S3 and EC2 this month, I'm getting fairly familiar with their signing stuff, which is very non-standard, but nice: you sign the headers, including the MD5 checksum of the data you are about to upload, and stick the signature in as an extra query parameter. It may be proprietary, but it is common to all the AWS services. So you write it once and be done with it, or take a copy that works from someone else. It's a lot easier than implementing WS-Security, believe me. Things we agree on: "Distributed computing is hard no matter what!" Welcome to the big projects. "If writing services, write them so you can offer either a RESTful interface or a WS-* one": why? So you double your documentation + test + example client + maintenance engineering effort? Returning to AWS, their S3 doc is cluttered by all this up-front protocol neutrality, with 30 pages of SOAP you have to wade through before you get to the interesting stuff: GET, PUT, DELETE. And it is so nice. Do a PUT to a host that isn't there, and you create a new one. Do a GET on a host/bucket that is not there, and you get a 404 plus text back. Try it!
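Steve's description of the AWS signing scheme can be sketched in a few lines. This follows the legacy S3 query-string authentication (an HMAC-SHA1 over a canonical string-to-sign, base64-encoded); the secret key, access key ID, bucket path and expiry timestamp below are invented for illustration.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def s3_signature(secret_key, verb, resource, expires,
                 content_md5="", content_type=""):
    """Legacy S3 query-string auth: base64 of an HMAC-SHA1 over the
    canonical string-to-sign (verb, MD5, type, expiry, resource)."""
    string_to_sign = "\n".join(
        [verb, content_md5, content_type, str(expires), resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials and bucket, for illustration only.
sig = s3_signature("secret", "GET", "/hello.sanjiva/", 1196000000)
url = ("http://hello.sanjiva.s3.amazonaws.com/?AWSAccessKeyId=AKID"
       "&Expires=1196000000&Signature=" + quote(sig, safe=""))
```

As Steve notes, once this helper exists you can reuse it across the AWS services, which is a far smaller burden than a WS-Security stack.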
You don't even need a new client, just the browser in front of you:

http://hello.sanjiva.s3.amazonaws.com/

HTTP/1.1 404 Not Found
x-amz-request-id: 2D4A2D0DFCB1190C
x-amz-id-2: kFK/F3fWy2UcGCtlndSL1wQvIF2AI0p1oGBvF3frHN5HSMc5aKKk/6WjBoxdOhek
Content-Type: application/xml
Date: Fri, 23 Nov 2007 23:12:14 GMT
Connection: close
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist</Message>
  <RequestId>11FFFDAC076F89E8</RequestId>
  <BucketName>hello.sanjiva</BucketName>
  <HostId>eKJoiWj1kfzfZNjM3QZw/tqOX4yQtwPhzzO/mXO+MhoaasZI2vc8LgSiStKIMXj0</HostId>
</Error>

There, isn't that nice? It's not perfect; they should have used SOAPFault, which is, to me, the only thing that SOAP got almost right. I say almost because you can only send a SOAPFault with a 500 error code, not here with a 404, or somewhere else with a 3XX. But once I stop using third-party libraries that suffered JAXB grief with the XSD for this error format, and move to XOM and XPath, I can handle what Amazon sends back. -steve >(I've been a lurker for a while!) welcome
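Steve mentions switching from JAXB-style binding to XOM and XPath in Java; the same idea, pulling the interesting fields out of the S3 error document above, can be sketched with Python's standard-library ElementTree (a stand-in here for illustration, not what Steve actually used):

```python
import xml.etree.ElementTree as ET

# The S3 error document quoted above, used as the response body.
body = """<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist</Message>
  <RequestId>11FFFDAC076F89E8</RequestId>
  <BucketName>hello.sanjiva</BucketName>
</Error>"""

error = ET.fromstring(body)
code = error.findtext("Code")          # the machine-readable error code
bucket = error.findtext("BucketName")  # which bucket the 404 refers to
```

The point being: a path query over the document as-is is robust against schema churn in a way that generated binding code is not.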
On Nov 14, 2007 7:23 PM, Mark Baker <distobj@...> wrote: > On 11/14/07, Suyog <suyog.gaidhani@...> wrote: > > Hi all, > > > > We have an application wherein we need to fetch a result set based on a > > filter criteria. The filter criteria cannot be specified as part of the > > URI as it is a fairly complex object that needs to get into the request > > body. So my request-line will be GET \data with the filter criteria > > coming in via the entity body. However, from various sources, I gathered > > that it is not recommended to send data as part of the entity body when > > using GET, DELETE and so on. However, I could not find this mentioned > > explicitly in the HTTP standard. > > The spec permits it, but you could conceivably run into problems with > deployed software and SDKs which don't. Most likely, the software that is broken will be someone else's proxy server that they can't bypass. > AFAIK, the possibility for > its use in the wild is completely untested. Of course, this would be a very interesting experiment if someone were to do it; I just wouldn't make it a prerequisite for my code working, and it's something you'd want to assess across a broad spread of network configurations (more than just PlanetLab; you'd want the corporate firewall and home ISP coverage too). It would make for a good paper. -steve (unrelated trivia: if you mount a network drive on Windows XP with an FQDN, such as z: \\chamonix.hpl.hp.com\steve, the OS does a WebDAV PROPFIND against port 80 before switching to SMB requests. Clearly you can try cutting-edge stuff, such as content in a GET, but be prepared to fall back to something that works, like HTTP headers)
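To make the point concrete: between two endpoints you control, nothing stops an entity body on a GET, and (for example) Python's standard library will send and read one without complaint. The untested part is the intermediaries in between. A toy round trip against a local server, with an invented filter payload:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read the entity body the client declared via Content-Length.
        length = int(self.headers.get("Content-Length", 0))
        received["body"] = self.rfile.read(length)
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.handle_request, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/data", body='{"filter": {"price": {"lt": 100}}}')
resp = conn.getresponse()
payload = resp.read()
conn.close()
server.server_close()
```

This succeeds locally; whether a corporate proxy between a real client and server would pass the body through is exactly the open question Steve raises.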
On Nov 23, 2007, at 11:22 PM, Berend de Boer wrote: > If you do this, you can track a not yet logged in user perfectly > fine. No url rewriting needed. No cookies needed. For solving the problem you mention - maintaining information about unauthenticated visitors - cookies seem like a good solution: if the user has disabled them, everything will still work; the browser sends them along automatically; URLs don't need to change; JavaScript doesn't need to be enabled. I don't see "no cookies" as a goal in itself. What are the benefits of your approach over using them? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Stefan Tilkov wrote: > For solving the problem you mention - maintaining information about > unauthenticated visitors - cookies seem like a good solution: This still violates the fundamental principle that each resource is identified by at least one URL, and not by something outside the URL. Thus it breaks links, bookmarking, and all the usual suspects. If resources such as a user's shopping cart or a particular user's preferences are generated dynamically, then URLs pointing to these resources should also be generated dynamically and the user should be redirected to those resources. Do not think of this as hiding session information in the URL. Properly done, it's not. Think of it as dynamic creation of resources. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 11/24/07, Elliotte Harold <elharo@...> wrote: > If resources such as a user's shopping cart or a particular user's > preferences are generated dynamically, then URLs pointing to these > resources should also be generated dynamically and the user should be > redirected to those resources. Agreed, but I don't think cookies are the problem in that case: one could do the same damage with, say, Apache Basic Auth.
On Nov 24, 2007, at 10:59 AM, Berend de Boer wrote: >>>>>> "Stefan" == Stefan Tilkov <stefan.tilkov@...> writes: > > Stefan> I don't see "no cookies" as a goal in itself. What are the > Stefan> benefits of your approach over using them? > > Cookies can, and are, used to track user behaviour. Tools like SpyBot > S&D now warn about and delete such cookies. Cookies have serious privacy > concerns. There are caching issues as well, as cookies are not > described in the spec. > > Just enable the "ask" option for cookies and then surf the web. Everyone > wants to store data on your pc these days. > But my point is that your suggestion seems to reinvent the same thing. After all, your use case seems to be tracking user behavior. Or did I misunderstand you? Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/ > -- > Cheers, > > Berend de Boer
On Nov 24, 2007, at 3:27 PM, Elliotte Harold wrote: > Stefan Tilkov wrote: > > > For solving the problem you mention - maintaining information about > > unauthenticated visitors - cookies seem like a good solution: > > This still violates the fundamental principle that each resource is > identified by at least one URL, and not by something outside the URL. > Thus it breaks links, bookmarking, and all the usual suspects. > > I agree on all the fundamentals of RESTful design. I'm opposed to cookies in general. But, *if* I want to track unauthenticated users, cookies seem like a good option, since I can decide to have everything identifiable by a URI and use them as a purely optional aspect. One that you, for example, can opt out of by turning cookies off in your browser - i.e., if I use cookies, I'm at least open about what I'm doing. > If resources such as a user's shopping cart or a particular user's > preferences are generated dynamically, then URLs pointing to these > resources should also be generated dynamically and the user should be > redirected to those resources. > +1. > > > Do not think of this as hiding session information in the URL. > Properly > done, it's not. Think of it as dynamic creation of resources. > > +1, too. Again, my point is that using cookies as an optional part doesn't seem to be that bad. But maybe I'm still missing something? Stefan > -- > Elliotte Rusty Harold elharo@... > Java I/O 2nd Edition Just Published! > http://www.cafeaulait.org/books/javaio2/ > http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
This is an excellent and creative idea, Berend. Kudos. On Nov 23, 2007 2:22 PM, Berend de Boer <berend@...> wrote: > Hi Guys, > > Just got an idea that looks promising. There's only one instance where > cookies seem to be necessary, that is for shoppers who come to the > website and haven't identified themselves. So you give them a cookie and > you track what they're doing. > > Some suggested solutions: > > 1. Client should cache this: > - It might be too much you want to cache at the client > - We don't have such a tool yet, perhaps Google Gears in the future? > > 2. URL rewriting: works, but somewhat annoying to make it work, either > you need to rewrite most if not all the urls in your output, or you > use relative urls and have a prefix which is a uuid or so, so you can > track unique visitors. > > > But there might be another solution, if JavaScript is enabled: > > 1. Create a unique id on the client (or the server can hand out one). > > 2. Use this unique id with XmlHttpRequest as the user name in > authenticating the user. Empty password would suffice. > > 3. The server should handle those kinds of logins by creating a new > account with that id. > > 4. If you login to the domain, i.e. "/", every request will contain the > authentication header. You could actually authenticate the requests > or not, that doesn't matter. The browser sends the username anyway. > > If you do this, you can track a not yet logged in user perfectly > fine. No url rewriting needed. No cookies needed. > > Drawbacks: need JavaScript. > > -- > Let me know your thoughts, > > Berend de Boer >
* Elliotte Harold <elharo@...> [2007-11-24 15:30]: > If resources such as a user's shopping cart or a particular > user's preferences are generated dynamically, then URLs > pointing to these resources shoudl also be generated > dynamically and the user should be redirected to those > resources. No one said you should use the cookie to address the one of multiple shopping carts hiding behind the same URI. The only purpose of the cookie in Stefan’s scheme is to track the movements of users who have not identified themselves through HTTP Auth. It also allows you to associate a shopping cart resource created for a particular cookie’d anonymous user with a named user’s account if the anonymous user later decides to log in to proceed with his order. (Then you find HTTP Auth credentials and a cookie with an ID in the same HTTP request, so you know they belong to each other.) None of this implies that client-accessible state such as the shopping cart be hidden behind a gateway URI, and the messages remain entirely self-descriptive and conformant to the uniform interface throughout. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Berend de Boer <berend@...> [2007-11-25 19:20]: > 2. Caching works. I don’t see how the caching situation is any different between the scenarios where the response varies on the auth request headers or on the cookie request headers. In both cases you’re exchanging personalised messages. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote: > No one said you should use the cookie to address the one of > multiple shopping carts hiding behind the same URI. The only > purpose of the cookie in Stefan’s scheme is to track the > movements of users who have not identified themselves through > HTTP Auth. Good point. I suppose I could swallow this provided the user still saw the same pages and the site still worked if cookies were turned off. (That's not what I see in practice though.) In other words if the only purpose of the cookie were to gather knowledge about how users navigate through a site. > It also allows you to associate a shopping cart resource created > for a particular cookie’d anonymous user with a named user’s > account if the anonymous user later decides to log in to proceed > with his order. (Then you find HTTP Auth credentials and a cookie > with an ID in the same HTTP request, so you know they belong to > each other.) No, that's not a good use of cookies. Here once again the shopping cart needs to be identified by a visible URL, not a cookie. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Stefan Tilkov wrote: > I agree on all the fundamentals of RESTful design. I'm opposed to > cookies in general. But, *if* I want to track unauthenticated users, > cookies seem like a good option since I can decide to have everything > identifiable by a URI and use them as a purely optional aspect. Tracking for its own sake does not violate the web architecture. However you run into trouble as soon as you take the next step and use the tracked path to determine which resource or representation you serve to the client, rather than having that depend purely on the URL and the relevant HTTP headers. In practice I rarely see sites tracking users without also doing something obvious with those tracks. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 11/25/07, A. Pagaltzis <pagaltzis@...> wrote: > I don't see how the caching situation is any different between > the scenarios where the response varies on the auth request > headers or on the cookie request headers. In both cases you're > exchanging personalised messages. Correct me if I'm wrong, but *should* the response vary on the auth request headers? I thought those were only supposed to determine whether or not the page was served, not what the content thereof was. So a shopping cart, for instance, would still have to have a unique URL per user. And rightly so... perhaps the customer's assigned sales rep should be allowed to see his cart. Perhaps wholesale customers are permitted to see parts of the catalog retail customers aren't, so are authorized for more product pages (but changing the price on a product page based on the authentication of the user, would be a no-no).
>>>>> "A" == A Pagaltzis <pagaltzis@...> writes:
A> I don’t see how the caching situation is any different between
A> the scenarios where the response varies on the auth request
A> headers or on the cookie request headers. In both cases you’re
A> exchanging personalised messages.
As I understand the spec, you can cache authenticated responses, if the
public directive is set. Which isn't useful for the page itself, but
might be for parts of the page that are loaded dynamically.
I've never seen the domain feature of authentication work, so everyone
authenticates the whole domain, but this might help to have caching for
parts that can be cached.
But the point was more directed at cookies: if cookies are present, the
response isn't cacheable.
--
Cheers,
Berend de Boer
On Nov 25, 2007, at 8:07 PM, Berend de Boer wrote: >>>>>> "Karen" == Karen <karen.cravens@...> writes: > > Karen> Correct me if I'm wrong, but *should* the response vary on > Karen> the auth request headers? I thought those were only > supposed > Karen> to determine whether or not the page was served, not what > the > Karen> content thereof was. > > Interesting point. Not sure what the authoritative answer is, but it > looks to me that can't be true. You would never be able to say > "Welcome > Karen", based on authentication. Maybe this was asked here before, but what is the practical value behind this view? Subbu
Berend de Boer wrote: > >>>>>> "Karen" == Karen <karen.cravens@...> writes: > > Karen> Correct me if I'm wrong, but *should* the response vary on > Karen> the auth request headers? I thought those were only supposed > Karen> to determine whether or not the page was served, not what the > Karen> content thereof was. > >Interesting point. Not sure what the authoritative answer is, but it >looks to me that can't be true. You would never be able to say "Welcome >Karen", based on authentication. > Sure you can, provided you send a "Vary: Authorization" header with the response. If the user is not authenticated, the representation which corresponds to no Authorization header can say "Welcome Stranger". The no-auth representation would set Cache-Control: public, while Karen's personalized representation would set Cache-Control: private. -Eric
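Eric's scheme can be sketched as a tiny header-picking helper (the function name and plumbing are hypothetical): the response always declares that it varies on Authorization, and cacheability flips on whether the request actually carried credentials.

```python
# Hypothetical helper: choose response caching headers per Eric's scheme.
def response_headers(request_headers):
    headers = {"Vary": "Authorization"}
    if "Authorization" in request_headers:
        headers["Cache-Control"] = "private"  # personalised "Welcome Karen"
    else:
        headers["Cache-Control"] = "public"   # shared "Welcome Stranger"
    return headers

anon = response_headers({})
karen = response_headers({"Authorization": "Basic a2FyZW46c2VjcmV0"})
```

A shared cache may then keep one copy of the anonymous representation for everyone, while Karen's personalised copy stays in her private browser cache only.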
On 11/25/07, Berend de Boer <berend@...> wrote: > Interesting point. Not sure what the authoritative answer is, but it > looks to me that can't be true. You would never be able to say "Welcome > Karen", based on authentication. I would say that's just window dressing, in that case. It's not something you'd serve in the straight-up XML, or JSON, or whatever - it's strictly for human consumption. Now, if I reauthenticate with a different identity, and use my back button, then I should still be able to follow links and fill out forms on that page in my browser, and the new credentials should function seamlessly. I suppose it's arguable whether you should, for instance, pre-fill user information in the "value" field of HTML forms, especially hidden ones. On the one hand, that's not going to function seamlessly in the above case without JavaScript to adjust at least the hidden values, but if you could depend on JS to do it, you could have JS fill in the hidden forms with the at-the-moment current value just like any other full-featured REST client. But on the other, sometimes you're having to cope with a human+dumb browser as your REST client, and you're just going to have to hope you've given the human enough clues to hit "reload" where appropriate.
Berend de Boer wrote: > >>>>>> "A" == A Pagaltzis <pagaltzis@...> writes: > > A> I don’t see how the caching situation is any different between > A> the scenarios where the response varies on the auth request > A> headers or on the cookie request headers. In both cases you’re > A> exchanging personalised messages. > >As I understand the spec, you can cache authenticated responses, if the >public directive is set. Which isn't useful for the page itself, but >might be for parts of the page that are loaded dynamically. > I always thought of this as useful for users authenticated site-wide, accessing static, public pages on a site. With an Authorization header in the request, caching must be explicit; otherwise each user will get their own individual copy of non-personalized pages, since the default cache behavior is for all content requested with Authorization headers to be treated as "Cache-Control: private". > >But the point was more directed at cookies: if cookies are present, the >response isn't cacheable. > You can send a 'Cache-Control: no-cache="Set-Cookie"' header to allow caching of a page, but not its Set-Cookie header. I think I read that in reality, that's how most caches behave anyway -- stripping the Set-Cookie header regardless, while otherwise caching the response (unless no-cache is set). -Eric
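A rough sketch of what a cache honouring no-cache="Set-Cookie" might do: store the response, minus the headers named in the directive. The Cache-Control parsing here is deliberately naive (a real cache needs the full directive grammar), and the header values are invented.

```python
import re

# Sketch: copy a response for the cache, dropping any header fields
# named in a no-cache="..." directive (e.g. Set-Cookie).
def cacheable_copy(response_headers):
    cc = response_headers.get("Cache-Control", "")
    excluded = {name.strip().lower()
                for field in re.findall(r'no-cache="([^"]+)"', cc)
                for name in field.split(",")}
    return {k: v for k, v in response_headers.items()
            if k.lower() not in excluded}

stored = cacheable_copy({
    "Cache-Control": 'public, no-cache="Set-Cookie"',
    "Set-Cookie": "session=abc123",
    "Content-Type": "text/html",
})
```

The stored copy keeps the page itself cacheable while the per-user cookie never reaches another client.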
* Karen <karen.cravens@...> [2007-11-26 00:50]: > On 11/25/07, A. Pagaltzis <pagaltzis@...> wrote: > > I don't see how the caching situation is any different > > between the scenarios where the response varies on the auth > > request headers or on the cookie request headers. In both > > cases you're exchanging personalised messages. > > Correct me if I'm wrong, but *should* the response vary on the > auth request headers? Why not? Note I am talking *only* about things like a product page with a link to the shopping cart at the top where the link target URI varies based on which user is authenticated. The data about the product and other main page content is the same for all users. So you’re not addressing a different resource, just twiddling the representation returned. That’s perfectly RESTful. There is no addressing information outside the URI. What is lost with this design compared to an approach that puts a per-user- or per-session token in the URI is that you cannot link me to the version of the product page that shows the link to *your* cart at the top (rather than mine, based on the auth headers I am sending). That seems like no big loss. In fact, it is probably none at all, because if you gave me such a link and I tried to follow it, chances in practice are I’d get an error because the auth credentials in my request do not match those implied by the URI token. > So a shopping cart, for instance, would still have to have a > unique URL per user. [… snip …] Sure – that’s how I’d do it. * Karen <karen.cravens@...> [2007-11-26 05:45]: > On 11/25/07, Berend de Boer <berend@...> wrote: > > Interesting point. Not sure what the authoritative answer is, > > but it looks to me that can't be true. You would never be > > able to say "Welcome Karen", based on authentication. > > I would say that's just window dressing, in that case. It's not > something you'd serve in the straight-up XML, or JSON, or > whatever - it's strictly for human consumption. 
I subscribe to the Leonard/Ruby view: a web site is a web service is a web site. Regardless of whether you are designing for human or machine consumption, the same considerations apply. > Now, if I reauthenticate with a different identity, and use my > back button, then I should still be able to follow links and > fill out forms on that page in my browser, and the new > credentials should function seamlessly. That probably won’t work. The link to the shopping cart would continue to point to the other account’s cart, which you are probably no longer authorised to view. Only if you put all the shopping carts behind a gateway URI can that link continue to work. But we already agree that this is bad and violates REST. And if you do not vary the page content based on authentication, but rather put a user or session token in the URI, as Elliotte is proposing, then going back with your Back button won’t work at all because you will no longer be authorised to view those pages or the pages they link to (which also have the token in the URI, of course) either. > I suppose it's arguable whether you should, for instance, > pre-fill in user information in the "value" field of HTML > forms, especially hidden ones. On the one hand, that's not > going to function seamlessly in the above case without > JavaScript to adjust at least the hidden values, but if you > could depend on JS to do it you could have JS fill in the > hidden forms with the at-the-moment current value just like any > other full-featured REST client. If you can depend on JS you can solve the personalisation issue *and* the caching issue by using DOM scripting and XMLHttpRequest to stitch *all* the personalised bits into the page, allowing you to serve the main content in unpersonalised representations that are trivially cachable. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Berend de Boer <berend@...> [2007-11-26 05:10]: > * A Pagaltzis <pagaltzis@...> writes: > > I don’t see how the caching situation is any different > > between the scenarios where the response varies on the auth > > request headers or on the cookie request headers. In both > > cases you’re exchanging personalised messages. > > As I understand the spec, you can cache authenticated > responses, if the public directive is set. My point is about personalised representations. You can’t usefully cache those responses anyway, because the personalised bits are supposed to differ for each client (more or less). It doesn’t matter whether the request header that causes this variation is an auth header or a cookie header – the responses are equally uncachable anyhow. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Elliotte Harold <elharo@...> [2007-11-26 00:30]: > A. Pagaltzis wrote: > > It also allows you to associate a shopping cart resource > > created for a particular cookie’d anonymous user with a named > > user’s account if the anonymous user later decides to log in > > to proceed with his order. (Then you find HTTP Auth > > credentials and a cookie with an ID in the same HTTP request, > > so you know they belong to each other.) > > No, that's not a good use of cookies. Here once again the > shopping cart needs to be identified by a visible URL, not a > cookie. Addressability of the shopping cart is orthogonal. The situation is this: an anonymous user started putting items in a shopping cart. In the midst of her shopping tour, she realises she was logged out, so she logs in. You do *not* want her to lose the items she put in her shopping cart so far; you have to associate that shopping cart with the account she logged in to. Whether the shopping cart was explicitly addressable or hidden behind a gateway URI is inconsequential. But if you want to do it without URI rewriting, you *need* the cookie to associate an anonymous user with a cart. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
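The association step Aristotle describes is easy to sketch: a hypothetical in-memory store where anonymous carts live under a cookie ID until a request arrives bearing both that cookie and HTTP Auth credentials, at which point the cart is reattached to the named account. All names and IDs below are invented.

```python
# Hypothetical store: anonymous carts keyed by cookie ID, named carts by user.
carts_by_cookie = {"anon-7f3c": ["book", "teapot"]}
carts_by_user = {}

def associate_on_login(cookie_id, username):
    """Called when one request carries both a cookie ID and auth credentials:
    move the anonymous cart under the authenticated account."""
    cart = carts_by_cookie.pop(cookie_id, [])
    carts_by_user.setdefault(username, []).extend(cart)
    return carts_by_user[username]

merged = associate_on_login("anon-7f3c", "karen")  # cart survives the login
```

Note this says nothing about how the cart is addressed; whether it sits behind its own URI is, as Aristotle says, orthogonal.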
On 11/25/07, A. Pagaltzis <pagaltzis@...> wrote: > * Elliotte Harold <elharo@...> [2007-11-26 00:30]: > > A. Pagaltzis wrote: > > > It also allows you to associate a shopping cart resource > > > created for a particular cookie'd anonymous user with a named > > > user's account if the anonymous user later decides to log in > > > to proceed with his order. (Then you find HTTP Auth > > > credentials and a cookie with an ID in the same HTTP request, > > > so you know they belong to each other.) > > > > No, that's not a good use of cookies. Here once again the > > shopping cart needs to be identified by a visible URL, not a > > cookie. > > Addressability of the shopping cart is orthogonal. > > The situation is this: an anonymous user started putting items in > a shopping cart. In the midst of her shopping tour, she realises > she was logged out, so she logs in. You do *not* want her to lose > the items she put in her shopping cart so far; you have to > associate that shopping cart with the account she logged in to. > > Whether the shopping cart was explicitly addressable or hidden > behind a gateway URI is inconsequential. But if you want to do > it without URI rewriting, you *need* the cookie to associate an > anonymous user with a cart. Out of curiosity, what shopping sites use HTTP authentication, such that you no longer need a cookie after you log in? Assaf > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/>
On Nov 26, 2007, at 12:25 AM, Elliotte Harold wrote: > A. Pagaltzis wrote: > > > No one said you should use the cookie to address the one of > > multiple shopping carts hiding behind the same URI. The only > > purpose of the cookie in Stefan’s scheme is to track the > > movements of users who have not identified themselves through > > HTTP Auth. > > Good point. I suppose I could swallow this provided the user still saw > the same pages and the site still worked if cookies were turned off. > (That's not what I see in practice though.) In other words if the only > purpose of the cookie were to gather knowledge about how users navigate > through a site. > > That was exactly what I was getting at -- it seems to me cookies are a good solution for this particular problem. Of course they're abused 95% of the time. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
>>>>> "A" == A Pagaltzis <pagaltzis@...> writes:
A> It doesn’t matter whether the request header that causes this
A> variation is an auth header or a cookie header – the responses
A> are equally uncachable anyhow.
No, that's not true. If I specify public as the cache directive and vary
the authorization header, the response is perfectly cacheable.
These cases are not necessarily identical.
--
Cheers,
Berend de Boer
* Berend de Boer <berend@...> [2007-11-26 08:45]: > * A Pagaltzis <pagaltzis@...> writes: > > It doesn’t matter whether the request header that causes this > > variation is an auth header or a cookie header – the > > responses are equally uncachable anyhow. > > No, that's not true. If I specify public as the cache directive > and vary the authorization header, the response is perfectly > cacheable. What does that help? Whether the cache is permitted to cache a response or not is irrelevant: you don’t want a cache to store a page personalised for user A and then deliver it to user B. Protocol-level cachability is entirely inconsequential to the fact that caching personalised representations is undesirable. Nor does it matter whether the personalisation is driven by cookies or HTTP auth. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Assaf Arkin <assaf@...> [2007-11-26 07:20]:
> Out of, curiosity what shopping sites use HTTP authentication,
> that you no longer need a cookie after you log in?
None. HTTP auth in browsers is too broken to rely on it,
unfortunately. :-(
One thing is the interop shakiness you run into if you go beyond
Basic Auth.
But the worst part is logging out. PAUSE (Perl Authors Upload
Server, the authenticated face of CPAN) f.ex. doesn’t have a
logout button – rather there’s an “About Logging Out” page, and I
quote:
== Short version ==
Over the years I have found the following methods of logging
out. None of them is guaranteed to work. Different browsers
fail in different ways at different versions. Please verify
that you are effectively logged out by your browser.
You may need to click on Cancel when your browser asks you to
login.
• Redirect with Cookie
• Redirect to Badname:Badpass@Server URL
• Quick direct 401
That’s… uhhh… not end user friendly. PAUSE gets away with it by
virtue of its audience.
It’s a crying shame, because trying to do the same things with
cookie-based auth as you’d do with HTTP Auth causes a number of
problems in a variety of circumstances. Educational thought
experiment: situation: a user POSTs something after their login
has timed out; task: come up with a way to handle this sensibly
under cookies and under HTTP auth, and compare.
Conclusion: cookies suck. Alas… :-(
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote:
> Whether the shopping cart was explicitly addressable or hidden
> behind a gateway URI is inconsequential. But if you want to do
> it without URI rewriting, you *need* the cookie to associate an
> anonymous user with a cart.

Well first of all, it's perfectly OK to do that with URL rewriting
(or redirection). Doing it without URL rewriting is an
implementation decision, not a requirement.

But even if you insist on not using URL rewriting, you can still
do it without cookies. Just store the shopping cart URL in a
hidden form field in the login screen, or check the referer to the
login screen. There are probably other tricks you can use with
Java or JavaScript.

But ultimately the right way to do this is to assign persistent
URLs to identifiable resources, and rely on them to identify the
items of interest.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Stefan Tilkov wrote:
>> Good point. I suppose I could swallow this provided the user
>> still saw the same pages and the site still worked if cookies
>> were turned off. (That's not what I see in practice though.)
>> In other words if the only purpose of the cookie were to
>> gather knowledge about how users navigate through a site.
>
> That was exactly what I was getting at -- it seems to me
> cookies are a good solution for this particular problem. Of
> course they're abused 95% of the time.

Only 95%? You're an optimist. :-)

And now that I think about it, I suspect that cookies aren't
really necessary even here. I have seen log file analyzers that
are smart enough to provide this sort of tracking information
without relying on cookies. You simply need to pay attention to
the address and time of various requests to link them up. It's
not perfect, but it's more than good enough for practical work.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Hi All-

In refactoring a digital library application to hew closer to
RESTful architectural principles, I settled on this approach to
authentication (I don't mean to suggest this is completely
RESTful; rather, it's as close as I could practically get in the
authentication bit).

1. Any resource identified by a URL that does NOT include a
user_id is accessible by ANY user and is always the EXACT same
representation (thus cacheable). (Note that personalization is
necessary and I'll address that below.)

2. Users have their own user collections and slideshows, and
these are addressed by a URL which includes the user_id -- these
resources are thus naturally 'personal', i.e. intended for and
accessible by that user only.

3. Anonymous users can't do much other than see a list of
collections available (these are not user collections, but
rather top-level, e.g. "Art & Art History Collection"), so they
need to log in. Note: Berend's idea for anonymous yet trackable
users might be perfect here.

4. Upon login, the server sets two cookies: DASE_USER, which
simply contains the user's user_id, and DASE_AUTH, which
contains an encrypted string which the server will be able to
decode to extract the user_id. (Security note -- vulnerable to
replay attacks, but there are a few ways those dangers might be
mitigated & besides, pretty good security is what we are after.)

5. As a logged-in user uses the site, they always have a menu of
their 'user collections & slideshows' available. This is where
we use the two cookies: on every request to a non user-specific
page, a second XMLHTTPRequest is made to grab a JSON data
structure of all of the user's personal data. The URL that XHR
uses to make the request includes, of course, the user_id which
it got from that plain-text cookie. Before the server sends back
the data, it decrypts the secure cookie to make sure that the
user_id in the URL matches the user_id in the secure cookie.
After obtaining the JSON data from the server, the page inserts
the user data into the DOM tree and thus the page is
personalized (this is just what Aristotle described a few
messages back).

6. While accessing 'personal' pages (user_id is in the URL), all
requests authenticate by comparing the URL user_id to the secure
cookie user_id.

7. But personalized interactions on a non-personalized page
(i.e. 'add to cart'/'remove from cart' links by each thumbnail
in a search results page) are all done by way of XHR hijacking
the link click and sending the appropriate HTTP request (POST or
DELETE) with a URL that includes the user_id (note: 'cart' is
just a specialized case of a user collection).

It's actually a pretty simple approach and has not been too
difficult to implement. If anyone has suggestions for a
better/simpler and/or more RESTful approach, I would love to
hear them. The project (built w/ PHP5/(MySQL|PostgreSQL|SQLite|
XML)) will be released as open source software with a target
audience of higher-ed folks (it's being developed at the
University of Texas at Austin), so simplicity of design is
critical. I need it to be as simple to
install/hack/maintain/extend as Wordpress. Scalability is also
important, and my initial benchmarking (leveraging extensive
application-side file-based caching -- thanks REST!) shows
marked gains over the previous architecture in that regard.

Peter Keane
daseproject.org
> pkeane> 4. Upon login, the server sets two cookies: DASE_USER
> pkeane> which simply contains the user's user_id and DASE_AUTH
> pkeane> which contains an encrypted string which the server
> pkeane> will be able to decode to extract the user_id.
> pkeane> (Security note -- vulnerable to replay attacks, but
> pkeane> there are a few ways those dangers might be mitigated &
> pkeane> besides, pretty good security is what we are after)
>
> You don't need this. On the backend you have the user who
> logged in (REMOTE_USER), you use that to emit some JavaScript
> in the output that pulls the user specific stuff with XHR.

I'm not sure that'll work for me. The login method I use is NOT
HTTP basic auth (forgive my ignorance if I am mistaken, but
isn't it the auth header from which the server gets
REMOTE_USER?) unless I can use JavaScript to set the
authentication information in the browser (I am unaware of how
to do that w/o presenting the user with the little HTTP basic
login window).

The reason is that I use a pluggable authentication scheme that,
in my case, uses our university's single sign-on system (that's
a requirement, and other universities will either have similar
requirements or will use Shibboleth). In the case of single
sign-on, the user is redirected to a university sign-on page and
then returned with a secure cookie that the authentication
module in the backend (it actually uses a university-sanctioned
Apache module) knows how to decrypt to get the user's id (Note:
this is not unlike Google's AuthSub). It's THAT id that I use to
set DASE_USER.

Note that I am now done with the DASE_USER cookie as far as the
server goes. It'll only be used by the client to generate
appropriate XHR URLs when necessary. It's a piece of session
state, but it lives on the client, not the server.

thanks-
Peter Keane

> --
> Cheers,
>
> Berend de Boer
Hi All-

One of the great benefits I have found of a RESTful application
architecture is the ability to "compose" new resources using
XSLT. The "document()" function allows me to bring in as many
sources as necessary, and passing a URL as the argument to the
document() function works fine. So any existing resources are
fair game for remixing into new resources.

The only problem is those resources that require authentication.
I have implemented a system for passing a URL token that will
allow authentication, which is simply an md5 digest of the URL
plus some 'salt' that both the requestor and provider of the
source document know, but this only works when both sides of the
service know what that 'salt' is. In many cases, this works fine
(sort of like the same-origin policy for XMLHttpRequest).

Anyone have further thoughts on authentication schemes for
resources when the requestor (e.g., XSLT's document() function)
has no way to set basic HTTP auth headers?

thanks-
Peter Keane
daseproject.org
i always explicitly mark authenticated responses with
Cache-Control: private,.....

i use mark nottingham's site as my guide on this issue:
http://www.mnot.net/cache_docs/#CACHE-CONTROL

<snip>
# public marks authenticated responses as cacheable; normally,
if HTTP authentication is required, responses are automatically
uncacheable.

# no-cache forces caches to submit the request to the origin
server for validation before releasing a cached copy, every
time. This is useful to assure that authentication is respected
(in combination with public), or to maintain rigid freshness,
without sacrificing all of the benefits of caching.
</snip>

Mike A

On 11/29/07, Berend de Boer <berend@...> wrote:
> Hi All,
>
> I'm trying to come to grips with the public cache-control
> directive and it seems it behaves somewhat differently than I
> thought. According to the spec, if it is present in a response:
>
> 3. If the response includes the "public" cache-control
> directive, it MAY be returned in reply to any subsequent
> request.
>
> (section 14.8).
>
> Does that indicate that such a response can be served to:
>
> 1. Requests without Authorization header?
>
> 2. Requests with Authorization header, but no attempt is made
> to authorize them?
>
> I understood public as:
>
> 1. Can be stored in public cache.
>
> 2. But will only be served to a request with a valid
> Authorization header, i.e. only to those identifying as the
> same http user that previously requested it.
>
> Any light?
>
> --
> Thanks,
>
> Berend de Boer

--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act." (George Orwell)
* pkeane <pkeane@...> [2007-11-29 15:10]:
> Anyone have further thoughts on authentication schemes for
> resources when the requestor (e.g., XSLT's document() function)
> has no way to set basic http auth headers?

http://www.mnot.net/blog/2005/10/18/libxslt_web ?

Presumes that you have control of the XSLT processor at the
client, of course.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Berend de Boer <berend@...> [2007-11-29 22:10]:
> I'm trying to come to grips with the public cache-control
> directive and it seems it behaves somewhat differently than I
> thought. According to the spec, if it is present in a response:
>
> 3. If the response includes the "public" cache-control
> directive, it MAY be returned in reply to any
> subsequent request.
>
> (section 14.8).
You only quoted half the text that’s relevant. Your excerpt is an
item in the list following this paragraph:
When a shared cache (see section 13.7) receives a request
containing an Authorization field, it MUST NOT return the
corresponding response as a reply to any other request,
^^^
unless one of the following specific exceptions holds:
^^^^^^
> Does that indicate that such a response can be served to:
>
> 1. Requests without Authorization header?
>
> 2. Requests with Authorization header, but no attempt is made
> to authorize them?
>
> I understood public as:
>
> 1. Can be stored in public cache.
>
> 2. But will only be served to a request with a valid
> Authorization header, i.e. only to those identifying as the
> same http user that previously requested it.
>
> Any light?
The terms I highlighted in the spec quotation should make
it pretty clear what the spec means. Note well that the word
“authenticated” makes no appearance in that context.
`Cache-Control: public` means “this representation isn’t private,
you are free to show it to other people.”
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
>>>>> "A" == A Pagaltzis <pagaltzis@...> writes:
A> The terms I highlighted in the spec quotation should make it
A> pretty clear what the spec means. Note well that the word
A> “authenticated” makes no appearance in that context.
A> `Cache-Control: public` means “this representation isn’t private,
A> you are free to show it to other people.”
That's indeed how I read the spec now.
But isn't it strange that the response doesn't appear in the cache until
it is first retrieved by an authenticated user? And after the first
authenticated request, everyone can see it?
That still confuses me. That's weird behaviour.
--
Cheers,
Berend de Boer
* Berend de Boer <berend@...> [2007-11-30 20:40]:
> * A Pagaltzis <pagaltzis@...> writes:
>> `Cache-Control: public` means “this representation isn’t
>> private, you are free to show it to other people.”
>
> That's indeed how I read the spec now.
>
> But isn't it strange that the response doesn't appear in the
> cache until it is first retrieved by an authenticated user? And
> after the first authenticated request, everyone can see it?
>
> That still confuses me. That's weird behaviour.

If that were the use case, it would be, but as it isn’t, it
ain’t. (With apologies to Lewis Carroll.)

Consider that clients will typically send auth credentials for
*any* URI within a realm after seeing the first 401, and that
without having seen a 401 for a specific resource, intermediaries
have no way to know whether the origin server actually requires
authorisation for it. `Cache-Control: public` addresses that by
giving the origin server a way to tell proxies “ignore the
authentication credentials in the request, this resource doesn’t
actually require authorisation.”

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
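[The rule from RFC 2616 section 14.8 being discussed in this
thread can be sketched as the predicate a shared cache applies.
This is a simplified illustration, not a complete cache: in the
must-revalidate and s-maxage cases the spec additionally requires
the cache to revalidate before reuse:]

```javascript
// Simplified sketch of RFC 2616 sec. 14.8: a shared cache MUST NOT
// reuse a response to a request that carried an Authorization header,
// unless the response included one of three Cache-Control exceptions:
// s-maxage, must-revalidate, or public.
function sharedCacheMayReuse(requestHadAuthorization, cacheControl) {
  if (!requestHadAuthorization) return true; // the rule doesn't apply
  const directives = (cacheControl || '').toLowerCase()
    .split(',')
    .map(function (d) { return d.trim().split('=')[0]; });
  return directives.indexOf('public') !== -1 ||
         directives.indexOf('must-revalidate') !== -1 ||
         directives.indexOf('s-maxage') !== -1;
}
```

[Note how this matches Aristotle's reading: `public` doesn't mean
"authenticated users only"; it means the authorization rule is
waived entirely for this resource.]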
It seems to me like cookies are regarded as something to be avoided
(and undoubtedly they are usually misused), but aren't there RESTful uses
of cookies that could/should be encouraged (i.e., built into frameworks,
etc)?
Per Ruby/Richardson (p.253):
"The server can suggest values for a cookie using the Set-Cookie header, just
like it can suggest links the client might want to follow...[snip]. The
cookie is just a convenient container for application state, which makes
its way to the server in representations and URIs. That's a very RESTful use
of cookies."
Does it follow that personalization can BEST be achieved by having a
cookie that contains the user's id sit on the browser and be used to
construct URLs for XMLHTTPRequests (e.g.,
http://example.com/userdata/{user-id}) that will return data to be
inserted into the page?
Note that I do not want the user-id to be included in the url for the page
itself (e.g. http://example.com/home). I am assuming that the login
process, which can use HTTP Auth, will give the server the opportunity to
set the cookie at the start of the login 'session'.
thoughts?
-Peter Keane
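[For concreteness, the client side of what Peter proposes might
look like the following; the cookie name and URL template come
from his messages, while the helper functions are hypothetical.
In a browser you'd pass `document.cookie` and fetch the result
with XMLHttpRequest:]

```javascript
// Sketch: read the user's id from a plain-text cookie and build the
// personalization URL from it; the page then fires an XHR at that URL
// and inserts the returned data into the DOM.
function getCookie(cookieHeader, name) {
  const pairs = cookieHeader.split(';');
  for (let i = 0; i < pairs.length; i++) {
    const eq = pairs[i].indexOf('=');
    if (pairs[i].slice(0, eq).trim() === name) {
      return pairs[i].slice(eq + 1).trim();
    }
  }
  return null;
}

function userDataUrl(cookieHeader) {
  const userId = getCookie(cookieHeader, 'DASE_USER');
  if (userId === null) return null; // anonymous: skip the XHR entirely
  return 'http://example.com/userdata/' + encodeURIComponent(userId);
}
```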
pkeane wrote:
> Does it follow that personalization can BEST be achieved by having a
> cookie that contains the user's id sit on the browser and used to
> construct URLs for XMLHTTPRequests (e.g.,
> http://example.com/userdata/{user-id}) that will return data to be
> inserted into the page?
>
> Note that I do not want the user-id to be included in the url for the page
> itself (e.g. http://example.com/home). I am assuming that the login
> process, which can use HTTP Auth, will give the server the opportunity to
> set the cookie at the start of the login 'session'.
>
> thoughts?
>
You're breaking REST then. One fundamental principle is that the URI
identifies the resource, nothing else. Addressing and authentication are
two separate concerns, and you're mixing them up. Personalized resources
require personalized URLs.
The personalized URLs don't actually have to contain the user name if
that bothers you for some reason. However they do have to be unique to
the user for whom the data is personalized.
It gets a little tricky when, as you describe here, one page contains
resources accumulated from multiple URLs, but full REST requires that
the page itself still have a unique, identifiable URL. The more you move
away from this the less well the Web will work for you.
--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
> You're breaking REST then. One fundamental principle is that
> the URI identifies the resource, nothing else. Addressing and
> authentication are two separate concerns, and you're mixing
> them up. Personalized resources require personalized URLs.

Actually, the URI DOES identify the resource and nothing else.
It's the cookie (to be used only in the XHR 'personalizing'
request) that handles identity (and NOT as part of the page
request). Requesting the page URI returns the exact same
resource no matter who the user is (and thus it's cacheable).
It's the CLIENT that "decides" if it wants to take the next step
and personalize the page by firing an XHR. [XHR = XMLHTTPRequest]

> The personalized URLs don't actually have to contain the user
> name if that bothers you for some reason. However they do have
> to be unique to the user for whom the data is personalized.

No, I see no problem w/ user-id in the URL; I was simply aiming
to come up w/ a way to have a web application ALWAYS present
some personalized data but still provide a generic URL that
could be shared with other users AND which could have all of the
benefits that caching will provide.

I see that this page could be seen as breaking REST, but how far
up the stack does the system need to be RESTful? Everything
below this browser-based view follows REST principles and offers
all of the benefits of REST. It seems to me one of the goals of
REST is to facilitate new uses/remixes of resources in lots of
ways, many of which will not (and need not) be RESTful in the
final use case.

I don't mean to be argumentative -- just trying to find the
edges/best-principles, etc. Perhaps I'd be better off "mixing"
on the server side using XSLT and the document() function
towards the same end, while requiring the initial browser
request to include the user-id in the URL.

thanks!
Peter Keane

> It gets a little tricky when, as you describe here, one page
> contains resources accumulated from multiple URLs, but full
> REST requires that the page itself still have a unique,
> identifiable URL. The more you move away from this the less
> well the Web will work for you.
>
> --
> Elliotte Rusty Harold elharo@...
> Java I/O 2nd Edition Just Published!
> http://www.cafeaulait.org/books/javaio2/
> http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
pkeane wrote:
> Actually, the URI DOES identify the resource and nothing else.
> It's the cookie (to be used only in the XHR 'personalizing'
> request) that handles identity (and NOT as part of the page
> request). Requesting the page URI returns the exact same
> resource no matter who the user is (and thus it's cacheable).
> It's the CLIENT that "decides" if it wants to take the next
> step and personalize the page by firing an XHR.

There's a fuzzy issue here of just what exactly constitutes a
resource. There's a line beyond which sufficient client
personalization has created a new resource, and such a resource
should have its own URL.

As a practical matter, I think sites work better when there are
more URLs rather than fewer. States of an application/resource
should be identified by URLs. Too few URLs is often problematic.
I've yet to encounter a site where there were so many URLs I had
problems using it, but I daily encounter sites that use one URL
to identify things I'd like to link to or bookmark individually.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On 12/1/07, pkeane <pkeane@...> wrote:
> It seems to me like cookies are are regarded as something to be avoided
> (and undoubtedly they are usually misused), but aren't there RESTful uses
> of cookies that could/should be encouraged (i.e., built into frameworks,
> etc)?
I think the potential for abuse has in some cases triggered a
knee-jerk "All cookies are evil!" reaction, but yeah. As long as they
don't replace things that ought to be in the URL, they're as RESTful
as any other header information the client might choose to send. (And
as long as the server recognizes that it's something the client
*might* choose to send, but might not, and doesn't treat it as a way
of storing application state.)
> Does it follow that personalization can BEST be achieved by having a
> cookie that contains the user's id sit on the browser and used to
> construct URLs for XMLHTTPRequests (e.g.,
> http://example.com/userdata/{user-id}) that will return data to be
> inserted into the page?
"Best" probably depends on the particular application (it's perhaps
not "best" if you consider HTTP transactions expensive, for instance,
since it requires a separate call for personalization data), but I
consider it a good way, and do make use of it.
On Sat, 1 Dec 2007, Elliotte Rusty Harold wrote:
> pkeane wrote:
>
>> Actually, the URI DOES identify the resource and nothing else.
>> It's the cookie (to be used only in the XHR 'personalizing'
>> request) that handles identity (and NOT as part of the page
>> request). Requesting the page URI returns the exact same
>> resource no matter who the user is (and thus it's cacheable).
>> It's the CLIENT that "decides" if it wants to take the next
>> step and personalize the page by firing an XHR.
>
> There's a fuzzy issue here of just what exactly constitutes a
> resource. There's a line beyond which sufficient client
> personalization has created a new resource, and such a resource
> should have its own URL.

But I'd suggest that it's a resource that the server need not
know anything about. For one thing, it won't be of any use to
anyone but this particular user. Sort of like if I print out the
web page -- that piece of paper is a new resource, but it need
not have a URI since (aside from being a physical object, and
not network-addressable) it only matters to me here & now.

I'm trying to figure out where that outer edge of REST is. Where
does it stop being about a state that needs any further
representations transferred....

--Peter

> Elliotte Rusty Harold elharo@...
> Java I/O 2nd Edition Just Published!
> http://www.cafeaulait.org/books/javaio2/
> http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On Sun, 2 Dec 2007, Elliotte Rusty Harold wrote:
> pkeane wrote:
>
>> But I'd suggest that it's a resource that the server need not
>> know anything about. For one thing, it won't be of any use to
>> anyone but this particular user. Sort of like if I print out
>> the web page -- that piece of paper is a new resource, but it
>> need not have a URI since (aside from being a physical object,
>> and not network-addressable) it only matters to me here & now.
>
> If someone needs to know about it, then it needs a URI. There
> are many servers and many clients, and one machine may be both
> in its life. There are also semantic web and other applications
> that use URIs for identification rather than location and
> resolution. Even if a resource is only of use to one user (a
> supposition I suspect is very limiting in itself) it still
> needs a URI. Most obviously, that one user may still want to
> bookmark it.

I suspect that we will see more and more pages that are
"composed" of various services accessed by a page asynchronously
(usually w/ XHR), and although my example happens to get all of
the data from the same server, that needn't always be the case
(same-origin policy aside...). You can just say AJAX is not
RESTful and leave it at that, but that'd be a shame, since REST
has plenty of good design principles to offer such a design.

For me it comes down to this: The CLIENT needs to know "who" the
user is across multiple requests (application state, which is
A-OK as long as it lives on & is controlled by the client) so it
can fire XHRs which are asking for THIS user's data. The two
choices are a cookie (which I contend is not unRESTful if used
in this way) OR always including the user-id in the page URL so
that the server can encode that user-id in the hypertext which
the client can then use to get more (personalized) data.
I can imagine a page that offers a user a menu of services, and
as each is selected, a cookie is set (by the CLIENT, using
JavaScript) that causes that service & the data it offers to be
added to the page by way of XHR. As long as the cookie is still
there, the user always sees their own 'view' of the page. And
the server knows nothing of the cookie(s) -- they are totally
controlled by the user.

My contention is that this is a RESTful use of cookies -- I fear
that the anti-cookies p.o.v. throws out some baby w/ the
bathwater. By leveraging the intelligence of the client (the
ability to set/keep/expire cookies), the application that sits
on the server can be dramatically simplified as compared to the
application that needs to keep track of all of the different
"resources" that represent user preferences. The application is
essentially a "services engine" that the client mixes & mashes
as they wish....

-peter

> --
> Elliotte Rusty Harold elharo@...
> Java I/O 2nd Edition Just Published!
> http://www.cafeaulait.org/books/javaio2/
> http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On Sun, 2 Dec 2007, Elliotte Rusty Harold wrote:
> pkeane wrote:
>
>> For me it comes down to this: The CLIENT needs to know "who"
>> the user is across multiple requests (application state, which
>> is A-OK as long as it lives on & is controlled by the client)
>> so it can fire XHRs which are asking for THIS user's data. The
>> two choices are a cookie (which I contend is not unRESTful if
>> used in this way) OR always including the user-id in the page
>> URL so that the server can encode that user-id in the
>> hypertext which the client can then use to get more
>> (personalized) data.
>
> If the client is building the page/resource, then the client
> should be assigning and managing URIs for those resources. The
> server does not need to know about these URIs, but that doesn't
> mean no one needs to know about them.

Agreed to the extent that the client needs to manage those
compound objects (i.e. new resources), but that it needs to
assign them a URI, I'm not convinced. I think we're beyond the
edge of finding benefit in REST constraints. E.g., if I print
out a bunch of articles from xml.com & staple them together, I
have created a new resource. But there may or may not be any
advantage to assigning that resource a URI. To suggest it is a
bad idea for me to create that compound object because I have no
intention of giving it a URI is where I see the problem. The
browser can manage those resources any way it wishes, and my
contention is that cookies & XHR are the best tools we have
right now to do that, and effectively separating out that
concern from the server opens up a world of
possibilities/opportunities.

> It is a fundamental principle of REST that resources are
> identified by URIs and not by some combination of URI and some
> non-URI identifier like a cookie. The URI alone should be
> sufficient. There are very good reasons for this principle.

Agreed, but that's not what I am talking about here.
In no case is there a "shared secret" communicated by way of a
cookie. The cookie is used ONLY to construct a new URL to access
another resource. Whether I use a cookie or some other mechanism
is irrelevant. Perhaps "cookie" is now so loaded w/ bad
connotations (and a history of misuse) that one dare not go
there. Firefox now allows you to "bookmark" a set of open tabs.
I see no reason not to offer that capability simply because it
violates a REST constraint.

> You seem to be getting hung up on a not-hugely relevant
> distinction between client and server. In the scenario you
> describe, the browser is in effect running a server, maybe not
> an HTTP server but a potentially RESTful server nonetheless. To
> REST, a server is simply that which receives URIs and returns
> representations.

Yes, agreed wholeheartedly, and that's exactly where I am going
with this. The browser will become increasingly more like a
server (interesting that Roy F's latest talk "A Little REST and
Relaxation" hints at a relaxation of the client/server
constraint as a useful experiment). I've been playing with the
Firefox "POW" (plain old web server) plugin for image uploading
to great effect (I don't "send" images to the server; rather I
point to a local directory and let the server grab 'em).

> Different URIs are served to the browser from different
> servers, one of which is the browser itself; but this does not
> change the fundamental principle that different resources
> should have different URIs.

But assigned by whom? The server or the browser-as-server? I see
this as a separation-of-concerns issue. Building a RESTful web
service need not require me (the guy developing the service on
the server side) to worry about those URIs that might need to be
assigned to new combinations of my resources -- that's a task
for the "application" that lives on the client and is going to
use my service.
(Note that in my original example the client-side "application"
is the JavaScript that manages the cookies and fires off the
XHRs.)

Ultimately, I feel like there is a great opportunity to create
really useful and simple web services by offloading some of the
organizing and creation of new compound objects onto the client,
and we have the tools -- not ideal, perhaps -- in cookies,
JavaScript & XHR, to do just that.

-peter

> --
> Elliotte Rusty Harold elharo@...
> Java I/O 2nd Edition Just Published!
> http://www.cafeaulait.org/books/javaio2/
> http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
if this is about how to determine the currently authenticated user
while still staying close to the REST constraints, it seems trivial to
create a resource that returns the current auth'ed user id to the
client for further use in altering URIs.
for example an ajax request to the /user/current/ resource could
return a simple text/plain object ("my-name"). now the client can use
this data to construct other URIs to get personalized data: GET
/users/{userid}/preferences
no cookies needed and standard caching rules will apply
in an even more RESTful implementation, the call to /users/current/
could return a document with links to all the other important data for
that user:
<ul class="user-data">
<li><a class="preferences" href="/user/my-name/preferences">preferences</a></li>
<li><a class="shopping-cart"
href="/user/my-name/shopping/E6$fhd">shopping cart</a></li>
</ul>
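[The first variant described above -- ask the server who the
current user is, then build the personalized URI from the answer
-- can be sketched like so. `httpGet` is a stand-in for whatever
XHR wrapper the client uses; the paths match the examples in
this message:]

```javascript
// Sketch of the two-step interaction: GET /users/current/ returns the
// auth'ed user id as text/plain; the client then constructs the URI
// for that user's personalized data. No cookies involved, and
// standard HTTP caching rules apply to both requests.
function preferencesUrl(httpGet) {
  const userId = httpGet('/users/current/').trim(); // e.g. "my-name"
  return '/users/' + encodeURIComponent(userId) + '/preferences';
}
```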
mca
On 12/2/07, pkeane <pkeane@...> wrote:
>
>
> On Sun, 2 Dec 2007, Elliotte Rusty Harold wrote:
>
> > pkeane wrote:
> >
> >> For me it comes down to this: The CLIENT needs to know "who" the user is
> >> across multiple requests (application state, which is A-OK as long as it
> >> lives on & is controlled by the client) so it can fire XHRs which are
> >> asking for THIS user's data. The two choices are a cookie (which I contend
> >> is not unRESTful if used in this way) OR always including the user-id in
> >> the page URL so that the server can encode that user-id in the hypertext
> >> which the client can then use to get more (personalized) data.
> >>
> >
> > If the client is building the page/resource, then the client should be
> > assigning and managing URIs for those resources. The server does not need to
> > know about these URIs, but that doesn't mean no one needs to know about them.
> >
>
> Agreed to the extent that the client needs to manage those compound
> objects (i.e. new resources), but needing to assign them a URI, I'm not
> convinced. I think we're beyond the edge of finding benefit in REST
> constraints. E.g., if I print out a bunch of articles from xml.com &
> staple them together I have created a new resource. But there may or may
> not be any advantage to assigning that resource a URI. To suggest it is a
> bad idea for me to create that compound object because I have no intention
> of giving it a URI is where I see the problem. The browser can manage
> those resources any way it wishes and my contention is that cookies & XHR
> are the best tools we have right now to do that, and by effectively
> separating out that concern from the server opens up a world of
> possibilities/opportunities.
>
> > It is a fundamental principle of REST that resources are identified by URIs
> > and not by some combination of URI and some non-URI identifier like a cookie.
> > The URI alone should be sufficient. There are very good reasons for this
> > principle.
> >
>
> Agreed, but that's not what I am talking about here. In no case is there
> a "shared secret" communicated by way of a cookie. The cookie is used
> ONLY to construct a new url to access another resource. Whether I use a
> cookie or or some other mechanism is irrelevant. Perhaps "cookie" is now
> so loaded w/ bad connotations (and a history of misuse) that one dare not
> go there. Firefox now allows you to "bookmark" a set of open tabs. I see
> no reason not to offer that capability simply because it violates a REST
> constraint.
>
> > You seem to be getting hung up on a not-hugely relevant distinction between
> > client and server, In the scenario you describe, the browser is in effect
> > running a server, maybe not an HTTP server but a potentially RESTful server
> > nonetheless. To REST, a server is simply that which receives URIs and returns
> > representations.
> >
>
> Yes, agreed wholeheartedly and that's exactly where I am going with this.
> The browser will become increasingly more like a server (interesting that
> Roy F's latest talk "A Little REST and Relaxation" hints at a relaxation
> of the client/server constraint to be a useful experiment). I've been
> playing with the Firefox "POW" (plain old web server) plugin for image
> uploading to great effect (I don't "send" images to the server, rather I
> point to a local directory and let the server grab 'em).
>
> > Different URIs are served to the browser from different servers, one of which
> > is the browser itself; but this does not change the fundamental principle
> > that different resources should have different URIs.
> >
>
> But assigned by whom? The server or the browser-as-server? I see this as
> a separation-of-concerns issue. Building a restful web service need not
> require me (the guy developing the service on the server side) to worry
> about those URIs that might need to be assigned to new combinations of my
> resources -- that's a task for the "application" that lives on the client
> and is going to use my service. (note that in my original example the
> client-side "application" is the javascript that manages the cookies and
> fires off the XHRs).
>
> Ultimately, I feel like there is a great opportunity to create really
> useful and simple web services by offloading some of the organizing the
> creation of new compound objects onto the client, and that we have the
> tools -- not ideal, perhaps -- in cookies, javascript & XHR, to do just
> that.
>
> -peter
>
> >
> > --
> > Elliotte Rusty Harold elharo@...
> > Java I/O 2nd Edition Just Published!
> > http://www.cafeaulait.org/books/javaio2/
> > http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
> >
>
>
>
> Yahoo! Groups Links
>
>
>
>
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
On Sun, 2 Dec 2007, mike amundsen wrote:
> if this is about how to determine the currently authenticated user
> while still staying close the REST constraints, it seems trivial to
> create a resource that returns the current auth'ed user id to the
> client for further use in altering URIs.
>
> for example an ajax request to the /user/current/ resource could
> return a simple text/plain object ("my-name"). now the client can use
> this data to construct other URI to get personalized data: GET
> /users/{userid}/preferences
>
> no cookies needed and standard caching rules will apply
>
> in an even more RESTful implementation, the call to /users/current/
> could return a document with links to all the other important data for
> that user:
>
> <ul class="user-data">
> <li><a class="preferences" href="/user/my-name/preferences">preferences</a></li>
> <li><a class="shopping-cart"
> href="/user/my-name/shopping/E6$fhd">shopping cart<a></li>
> </ul>
>
> mca
>
This presupposes that the request to /user/current/ will run into a
"hark-who-goes-there" WWW-Authenticate response, right? And that the
browser's Auth cache will come into play in order to respond to the
challenge. I think it's a great idea, and it leverages the HTTP
Authorization framework which seems better all around.
I could, in fact, simply make the request to /user/current/data/ and get
it all in one json response, right?
I'll be curious to know what others think, but it sounds to me like a
"best practice" solution for my use case.
thanks!
Peter
>
> On 12/2/07, pkeane <pkeane@...> wrote:
>>
>>
>> On Sun, 2 Dec 2007, Elliotte Rusty Harold wrote:
>>
>>> pkeane wrote:
>>>
>>>> For me it comes down to this: The CLIENT needs to know "who" the user is
>>>> across multiple requests (application state, which is A-OK as long as it
>>>> lives on & is controlled by the client) so it can fire XHRs which are
>>>> asking for THIS user's data. The two choices are a cookie (which I contend
>>>> is not unRESTful if used in this way) OR always including the user-id in
>>>> the page URL so that the server can encode that user-id in the hypertext
>>>> which the client can then use to get more (personalized) data.
>>>>
>>>
>>> If the client is building the page/resource, then the client should be
>>> assigning and managing URIs for those resources. The server does not need to
>>> know about these URIs, but that doesn't mean no one needs to know about them.
>>>
>>
>> Agreed to the extent that the client needs to manage those compound
>> objects (i.e. new resources), but needing to assign them a URI, I'm not
>> convinced. I think we're beyond the edge of finding benefit in REST
>> constraints. E.g., if I print out a bunch of articles from xml.com &
>> staple them together I have created a new resource. But there may or may
>> not be any advantage to assigning that resource a URI. To suggest it is a
>> bad idea for me to create that compound object because I have no intention
>> of giving it a URI is where I see the problem. The browser can manage
>> those resources any way it wishes and my contention is that cookies & XHR
>> are the best tools we have right now to do that, and by effectively
>> separating out that concern from the server opens up a world of
>> possibilities/opportunities.
>>
>>> It is a fundamental principle of REST that resources are identified by URIs
>>> and not by some combination of URI and some non-URI identifier like a cookie.
>>> The URI alone should be sufficient. There are very good reasons for this
>>> principle.
>>>
>>
>> Agreed, but that's not what I am talking about here. In no case is there
>> a "shared secret" communicated by way of a cookie. The cookie is used
>> ONLY to construct a new url to access another resource. Whether I use a
>> cookie or or some other mechanism is irrelevant. Perhaps "cookie" is now
>> so loaded w/ bad connotations (and a history of misuse) that one dare not
>> go there. Firefox now allows you to "bookmark" a set of open tabs. I see
>> no reason not to offer that capability simply because it violates a REST
>> constraint.
>>
>>> You seem to be getting hung up on a not-hugely relevant distinction between
>>> client and server, In the scenario you describe, the browser is in effect
>>> running a server, maybe not an HTTP server but a potentially RESTful server
>>> nonetheless. To REST, a server is simply that which receives URIs and returns
>>> representations.
>>>
>>
>> Yes, agreed wholeheartedly and that's exactly where I am going with this.
>> The browser will become increasingly more like a server (interesting that
>> Roy F's latest talk "A Little REST and Relaxation" hints at a relaxation
>> of the client/server constraint to be a useful experiment). I've been
>> playing with the Firefox "POW" (plain old web server) plugin for image
>> uploading to great effect (I don't "send" images to the server, rather I
>> point to a local directory and let the server grab 'em).
>>
>>> Different URIs are served to the browser from different servers, one of which
>>> is the browser itself; but this does not change the fundamental principle
>>> that different resources should have different URIs.
>>>
>>
>> But assigned by whom? The server or the browser-as-server? I see this as
>> a separation-of-concerns issue. Building a restful web service need not
>> require me (the guy developing the service on the server side) to worry
>> about those URIs that might need to be assigned to new combinations of my
>> resources -- that's a task for the "application" that lives on the client
>> and is going to use my service. (note that in my original example the
>> client-side "application" is the javascript that manages the cookies and
>> fires off the XHRs).
>>
>> Ultimately, I feel like there is a great opportunity to create really
>> useful and simple web services by offloading some of the organizing the
>> creation of new compound objects onto the client, and that we have the
>> tools -- not ideal, perhaps -- in cookies, javascript & XHR, to do just
>> that.
>>
>> -peter
>>
>>>
>>> --
>>> Elliotte Rusty Harold elharo@...
>>> Java I/O 2nd Edition Just Published!
>>> http://www.cafeaulait.org/books/javaio2/
>>> http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
>>>
>>
>>
>>
>>
>
>
> --
> mca
> "In a time of universal deceit, telling the truth becomes a
> revolutionary act. " (George Orwell)
>
On 12/2/07, mike amundsen <mamund@...> wrote:
> for example an ajax request to the /user/current/ resource could
> return a simple text/plain object ("my-name"). now the client can use
> this data to construct other URI to get personalized data: GET
> /users/{userid}/preferences
However, now you've created a system where users cannot pass a
bookmark to someone else.
JoAnn.com, a pretty major player in the sewing/craft industry, hands
out URLs that include a session ID. If I had a nickel for every time
someone had enthusiastically posted an URL in a craftblog, referring
readers to this page or a variant thereof:
http://www.joann.com/joann/common/session_expire_error.jsp;jsessionid=U0VRVRT1ITB0YP4SY5NFAFR50LD3KUPU
... I would be a rich person. (And go to that link, and notice where
the "click here" takes you. Obviously the inclusion of the sessionid
is *not* the only problem going on there. Yikes. Can you *get* more
annoying than that?)
But I digress. The inability to share links was the main reason I
keep user-specific URLs to an absolute minimum.
two things:
peter:
i was thinking that the /user/current/ request would *not* return a
401. this could be a public resource that returns the auth'ed user
*or* 'anonymous.' that way, it works for any public pages in your app.
but i can see how you might use it to start the login process, etc...
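A sketch of that public resource's decision, assuming HTTP Basic auth; the function name and shape are illustrative, not from the thread:

```python
import base64

def current_user(authorization_header):
    """Return the Basic-auth user name, or 'anonymous' when the request
    carries no usable credentials -- so the resource stays public and
    works on unauthenticated pages too."""
    prefix = "Basic "
    if not authorization_header or not authorization_header.startswith(prefix):
        return "anonymous"
    try:
        decoded = base64.b64decode(authorization_header[len(prefix):]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return "anonymous"
    user, _, _password = decoded.partition(":")
    return user or "anonymous"
```

The point of the sketch is that the server never issues a 401 from this route; it simply reports whoever the Authorization header identifies, or 'anonymous'.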
karen:
yes, if you plan on creating resources that *depend* on the preference
data, they will not bookmark well. however, the resource that responds
to the /user/current/ request can also return anonymous information.
this would make the pages 'work' for bookmarked content, but possibly
contain different data.
for example, if i try to bookmark my home page (that includes
user/current/ data via ajax) and share that with pals ("look at what's
in my shopping cart!"). but that's the same if i tried to share a
direct link to my cart with my pals, too.
the key is to be sure to *not* do what JoAnn.com does - make *public
content* dependent on the current user, right?
mca
On 12/2/07, Karen <karen.cravens@...> wrote:
> On 12/2/07, mike amundsen <mamund@...> wrote:
> > for example an ajax request to the /user/current/ resource could
> > return a simple text/plain object ("my-name"). now the client can use
> > this data to construct other URI to get personalized data: GET
> > /users/{userid}/preferences
>
> However, now you've created a system where users cannot pass a
> bookmark to someone else.
>
> JoAnn.com, a pretty major player in the sewing/craft industry, hands
> out URLs that include a session ID. If I had a nickel for every time
> someone had enthusiastically posted an URL in a craftblog, referring
> readers to this page or a variant thereof:
>
> http://www.joann.com/joann/common/session_expire_error.jsp;jsessionid=U0VRVRT1ITB0YP4SY5NFAFR50LD3KUPU
>
> ... I would be a rich person. (And go to that link, and notice where
> the "click here" takes you. Obviously the inclusion of the sessionid
> is *not* the only problem going on there. Yikes. Can you *get* more
> annoying than that?)
>
> But I digress. The inability to share links was the main reason I
> keep user-specific URLs an absolute minimum.
>
>
>
>
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
On Sun, 2 Dec 2007, Karen wrote:
> On 12/2/07, mike amundsen <mamund@...> wrote:
>> for example an ajax request to the /user/current/ resource could
>> return a simple text/plain object ("my-name"). now the client can use
>> this data to construct other URI to get personalized data: GET
>> /users/{userid}/preferences
>
> However, now you've created a system where users cannot pass a
> bookmark to someone else.
But in the scenario described, the original URL still works after being
passed to someone else. This new user simply sees a non-personalized
site and will probably be asked to log in to get their own personalized
version. (And note that the server does NOT vary the response based on
auth information for the original page, only the secondary XHR to
/user/current/data/, which seems OK to me.)
Of course this is for a general page that has some personalized bits in
it. A link to a user's cart would certainly need to have that user's
user-id in the URL and another user trying to access it will either be
allowed to see it or not.
>
> JoAnn.com, a pretty major player in the sewing/craft industry, hands
> out URLs that include a session ID. If I had a nickel for every time
> someone had enthusiastically posted an URL in a craftblog, referring
> readers to this page or a variant thereof:
>
> http://www.joann.com/joann/common/session_expire_error.jsp;jsessionid=U0VRVRT1ITB0YP4SY5NFAFR50LD3KUPU
>
Yuck!
> ... I would be a rich person. (And go to that link, and notice where
> the "click here" takes you. Obviously the inclusion of the sessionid
> is *not* the only problem going on there. Yikes. Can you *get* more
> annoying than that?)
>
> But I digress. The inability to share links was the main reason I
> keep user-specific URLs an absolute minimum.
>
On Sun, 2 Dec 2007, mike amundsen wrote:
> two things:
>
> peter:
> i was thinking that the /user/current/ request would *not* return a
> 401. this could be a public resource that returns the auth'ed user
> *or* 'anonymous.' that way, it works for any public pages in your app.
> but i can see how you might use it to start the login process, etc...
>
Oh yes, of course -- my mistake. The server has access to the currently
logged-in user (in PHP, by way of $_SERVER['PHP_AUTH_USER']). I had
forgotten that. So this is really sounding like a good solution...
> karen:
> yes, if you plan on creating resources that *depend* on the preference
> data, they will not bookmark well. however, the resource that responds
> to the /user/current/ request can also return anonymous information.
> this would make the pages 'work' for bookmarked content, but possibly
> contain different data.
>
In my use case, the resource certainly does NOT depend on the preference
data. The /user/current/ data simply adds a few links to the page that
this particular user might want/be authorized to follow.
-peter
> for example, if i try to bookmark my home page (that includes
> user/current/ data via ajax) and share that with pals ("look at what's
> in my shopping cart!"). but that's the same if i tried to share a
> direct link to my cart with my pals, too.
>
> the key is to be sure to *not* do what JoAnn.com does - make *public
> content* dependent on the current user, right?
>
> mca
>
>
> On 12/2/07, Karen <karen.cravens@...> wrote:
>> On 12/2/07, mike amundsen <mamund@...> wrote:
>>> for example an ajax request to the /user/current/ resource could
>>> return a simple text/plain object ("my-name"). now the client can use
>>> this data to construct other URI to get personalized data: GET
>>> /users/{userid}/preferences
>>
>> However, now you've created a system where users cannot pass a
>> bookmark to someone else.
>>
>> JoAnn.com, a pretty major player in the sewing/craft industry, hands
>> out URLs that include a session ID. If I had a nickel for every time
>> someone had enthusiastically posted an URL in a craftblog, referring
>> readers to this page or a variant thereof:
>>
>> http://www.joann.com/joann/common/session_expire_error.jsp;jsessionid=U0VRVRT1ITB0YP4SY5NFAFR50LD3KUPU
>>
>> ... I would be a rich person. (And go to that link, and notice where
>> the "click here" takes you. Obviously the inclusion of the sessionid
>> is *not* the only problem going on there. Yikes. Can you *get* more
>> annoying than that?)
>>
>> But I digress. The inability to share links was the main reason I
>> keep user-specific URLs an absolute minimum.
>>
>>
>>
>>
>
>
> --
> mca
> "In a time of universal deceit, telling the truth becomes a
> revolutionary act. " (George Orwell)
>
On 12/2/07, pkeane <pkeane@...> wrote:
> But in the scenario described, the original URL still works after being
> passed to someone else. This new user simply sees a non-personalized
Not the bits that are things like /users/{userid}/preferences, though.
> Of course this is for a general page that has some personalized bits in
> it. A link to a user's cart would certainly need to have that user's
> user-id in the URL and another user trying to access it will either be
> allowed to see it or not.
Right.
I guess the problem really lies in the general pages with
personalization. Since I've been poking around a vBulletin
installation this weekend, I'll go with what's on my mind (well,
mostly what's on my mind after that is "OH MY GOSH YOU'RE DOING
EVERYTHING WITH GET DO YOU HAVE ANY IDEA WHAT THAT COULD DO" but
that's neither here nor there, though I suppose it might explain why I
seem to be on such a "ranting about poor UI design" kick today): a web
forum.
Say you're displaying a thread. If it's a logged-in person, you know
which of the posts that person has already seen; if it's not a
logged-in person, you can just say "pretend anything older than N days
is marked read" and fake it. You don't want a custom URL for the
thread, but one little flag on each entry is potentially going to be
different for different visitors.
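That per-visitor flag can live outside the canonical thread resource. A rough sketch of the fallback Karen describes, with made-up names for illustration:

```python
from datetime import datetime, timedelta

def mark_read_flags(posts, read_ids=None, now=None, fallback_days=7):
    """Annotate each post with a 'read' flag: logged-in visitors get
    their real per-user read set; anonymous visitors get the 'pretend
    anything older than N days is read' fake-out."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=fallback_days)
    for post in posts:
        if read_ids is not None:
            post["read"] = post["id"] in read_ids
        else:
            post["read"] = post["posted"] < cutoff
    return posts
```

The thread URL stays the same for everyone; only this small decoration varies per visitor.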
> Yuck!
Yeah. It boggles the mind. In cases like this (JoAnn is not the only
offender; Simplicity doesn't have direct links to individual sewing
patterns, another common blog topic) I have been known to manually
construct urls to make requests like "GET
http://www.joann.com/YOUR_USER_INTERFACE_NEEDS_IMPROVEMENT", often
followed with specific suggestions.
I am pretty sure this is RESTful; I'm using a GET but I can't,
unfortunately, depend on the hoped-for side effects...
On 12/2/07, Karen <karen.cravens@...> wrote:
<snip>
I have been known to manually
> construct urls to make requests like "GET
> http://www.joann.com/YOUR_USER_INTERFACE_NEEDS_IMPROVEMENT", often
> followed with specific suggestions.
>
> I am pretty sure this is RESTful; I'm using a GET but I can't,
> unfortunately, depend on the hoped-for side effects...
</snip>
LOL!
<snip>
Say you're displaying a thread. If it's a logged-in person, you know
which of the posts that person has already seen; if it's not a
logged-in person, you can just say "pretend anything older than N days
is marked read" and fake it. You don't want a custom URL for the
thread, but one little flag on each entry is potentially going to be
different for different visitors.
</snip>
yeah, that's the challenge when you think about unique resources having
unique URIs. seems to me that you can use the /user/current/ pattern to
pull the info on the last post read by the auth-ed user and then allow
the client to alter the view of the data accordingly. this is the way
the desktop version of most forum readers works, right? (but the
/user/current/ is just a local state file).
so you still can have unique resources (the list of posts from
{page-start} to {page-end}) but allow clients to use their
personalization data to alter the view presented to the user.
mca
On Mon, 26 Nov 2007, A. Pagaltzis wrote:
> * Assaf Arkin <assaf@...> [2007-11-26 07:20]:
>> Out of, curiosity what shopping sites use HTTP authentication,
>> that you no longer need a cookie after you log in?
>
> None. HTTP auth in browsers is too broken to rely on it,
> unfortunately. :-(
>
> One thing is the interop shakiness you run into if you go beyond
> Basic Auth.
>
> But the worst part is logging out. PAUSE (Perl Authors Upload
There's a thread going now ("are cookies EVER restful") that seems to be
settling on HTTP auth as a good alternative to cookies for remembering the
logged in user. Some folks
(http://www.peej.co.uk/articles/http-auth-with-html-forms.html) have
formulated pretty good ideas for making it work. I also saw a blog
entry (cannot find it just now) that has a nice method for logoff: have
ONE route in your application accept "logoff" as the user and make sure
all the OTHER pages reject (401) "logoff" as a user. Logoff is as simple
as firing an XHR to that logoff page with "logoff" as the user and then
redirecting to another page. Seems pretty straightforward to me (and I'll
be exploring this), but I think it'll work across all browsers that
support XMLHttpRequest (XHR).
So might this be a good, standard method for remembering the user?
(Better, at least, than cookies?).
-peter
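The logout trick described above reduces to one routing rule. A minimal sketch, with a hypothetical route name:

```python
def status_for(path, user):
    """One route accepts the special user name 'logoff'; every other
    route answers 401 to it, which causes the browser to drop its
    cached HTTP-auth credentials at the next challenge."""
    if user == "logoff":
        return 200 if path == "/logoff" else 401
    return 200
```

After the XHR to /logoff succeeds as "logoff", any later request as that user hits a 401 and the browser prompts afresh.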
On 12/2/07, mike amundsen <mamund@...> wrote:
> yeah, i challenge when you think about unique resources having unique
> URIs. seems to me that you can use the /user/current/ pattern to pull
> the info on the last post read by the auth-ed user and then allow the
> client to alter the view of the data accordingly.

That's the solution I've leaned toward, other than the issue of dumb
clients (e.g. browsers without JavaScript).

> this is the way the desktop version of most forum readers works, right?
> (but the /user/current/ is just a local state file).

Like a newsreader's newsrc file, you mean? Because yeah, exactly, that is
what I'd do with JS *if* I could depend on it being there, and didn't
mind the overhead of two http requests: pull the newsrc-line equivalent
with a user-specific URL (to continue the newsrc analogy, that would be
part of the subscription resource), and run down the list of posts on the
thread page and flag all the ones that are unread based on that newsrc.

Except then that leads to the question of whether it's really much of a
leap to ask the server to do that combining for you, to support those
dumb clients. Perhaps http://domain/threadlocationbit for the canonical
and ...threadlocationbit?someflag=set for the customized one, where the
latter returns the canonical version in the Content-Location header, and
such other stuff as is needed to make sure smarter clients don't mistake
it for other than the compromise it is.
karen:

without thinking through all the details it seems that, if you want to
support 'dumb' (read: non-scripted) clients, you are bound to build all
the state into the delivered resource (a <form> with hidden elements for
html). in that case, you have a unique resource that requires a unique
URI.

the querystring argument model can help in creating unique URIs, but only
to a point, right? in the case where you use qs details to carry or
identify state, you always need to make sure the 'naked' resource URI
results in something reasonable for the client.

not having code-on-demand for the client can really complicate the work.

mca

On 12/2/07, Karen <karen.cravens@...> wrote:
<snip>
> Like a newsreader's newsrc file, you mean? Because yeah, exactly, that
> is what I'd do with JS *if* I could depend on it being there, and
> didn't mind the overhead of two http requests: pull the newsrc-line
> equivalent with a user-specific URL (to continue the newsrc analogy,
> that would be part of the subscription resource), and run down the
> list of posts on the thread page and flag all the ones that are unread
> based on that newsrc.
>
> Except then that leads to the question of whether it's really much of
> a leap to ask the server to do that combining for you, to support
> those dumb clients. Perhaps http://domain/threadlocationbit for the
> canonical and ...threadlocationbit?someflag=set for the customized
> one, where the latter returns the canonical version in the
> Content-Location header, and such other stuff as is needed to make
> sure smarter clients don't mistake it for other than the compromise it
> is.
</snip>
* pkeane <pkeane@...> [2007-12-02 23:20]:
> So might this be a good, standard method for remembering
> the user? (Better, at least, than cookies?).

Not to me. Without control of the clients, I would never mandate
Javascript for a function as vital as logging out.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
On Mon, 3 Dec 2007, A. Pagaltzis wrote:
> * pkeane <pkeane@...> [2007-12-02 23:20]:
>> So might this be a good, standard method for remembering
>> the user? (Better, at least, than cookies?).
>
> Not to me. Without control of the clients, I would never
> mandate Javascript for a function as vital as logging out.
>

I was thinking more along the lines of http auth as a better option than
cookies as a way to "remember" the user across requests w/o embedding a
user-id in the url. (I realize this may not be entirely RESTful.)* The
javascript thing was more about making it convenient for folks that want
that. Regular old http auth logout (i.e., a new login box next time a
protected resource is accessed) would still work if this thing was coded
properly.

Do you have a recommendation for this use case? I feel like if the REST
community can suggest some good practices here, one of the stumbling
blocks for folks new to REST will be cleared somewhat. I think the three
options are: http auth, cookies, or identity-in-url.

-Peter

* a nice quote from Adam Taft:
http://permalink.gmane.org/gmane.comp.java.restlet/3180

"First of all, I believe people need to get over this concept of "logging
in." For a RESTful request, there really is no such thing; logging in
implies server state and sessions, which of course is not RESTful. When
you request a protected resource, the server should simply expect proper
authentication headers to be included in the request."

> Regards,
> --
> Aristotle Pagaltzis // <http://plasmasturm.org/>
>
On 12/2/07, mike amundsen <mamund@...> wrote:
> without thinking through all the details it seems that, if you want to
> support 'dumb' (read: non-scripted) clients, you are bound to build all
> the state into the delivered resource (a <form> with hidden elements
> for html). in that case, you have a unique resource that requires a
> unique URI.

I guess I'm going to break RESTfulness in that case, because I'm not
seeing a benefit to uniqueness at that point, and I *am* seeing drawbacks
(inability to pass bookmarks around, a nontrivial consideration for a
community-oriented site). I consider the pre-filled form to be
non-content fluff (there's no equivalent to it in the other versions -
the JSON and such), just a helpful pre-rearrangement of the hard
information available elsewhere.

> not having code-on-demand for the client can really complicate the work.

Very much so. Also, not being able to rely on standards compliance.
(Could be worse. HTTP clients are *way* more compliant to their standards
than NNTP clients/servers are to theirs.)
Hello,

I was hoping to use XHTML as a representation format for all the great
reasons I read about in Richardson and Ruby's book, but I have a
question:

Lots of the data I hope to represent would cause the XHTML to become
invalid. Now, in my average web development I would just escape these
with HTML entities.

However, my programmatic clients now have to jump through some sort of
unescaping hoop to get that data back into its original format.

To me this seems like a frustrating exercise for the client. I would
expect after XPath-ing out a piece of data I should be more or less
ready to go.

I've looked into using CDATA blocks instead, but in FF/IE the content
is then not displayed at all.

Any advice? Is this escape/unescape game just a part of the cost?

Thanks,
-Miles
As a point of RESTful design, you should try to avoid formats which
aren't internet standards. If that's not feasible in this case, which is
totally possible, you may be able to do what you need using semantic
HTML. If you need to go beyond what H1 / Q / ABBR etc. can do, you may
be able to layer your format on top of semantic HTML using classes and
link[@rel].

On Dec 4, 2007 3:18 PM, Miles Crawford <mcrawfor@...> wrote:
> Hello,
>
> I was hoping to use XHTML as a representation format for all the great
> reasons I read about in Richardson and Ruby's book, but I have a
> question:
>
> Lots of the data I hope to represent would cause the XHTML to become
> invalid. Now, in my average web development I would just escape these
> with HTML entities.
>
> However, my programmatic clients now have to jump through some sort of
> unescaping hoop to get that data back into its original format.
>
> To me this seems like a frustrating exercise for the client. I would
> expect after XPath-ing out a piece of data I should be more or less
> ready to go.
>
> I've looked into using CDATA blocks instead, but in FF/IE the content
> is then not displayed at all.
>
> Any advice? Is this escape/unescape game just a part of the cost?
>
> Thanks,
> -Miles
Miles,

Most of the XML tools and technologies I've worked with automatically
handle escaping and unescaping. For example, if you set the content of a
paragraph element to "AT&T", it will be marshalled as <p>AT&amp;T</p>.
This is certainly true with Java data binding approaches like Castor and
JAXB, and I suspect it's true for the DOM and templating approaches as
well. Is that what you're getting at?

Kevin Christen
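The round-trip behavior Kevin describes can be seen in any XML toolkit; for instance with Python's ElementTree, chosen here just as a convenient stand-in for Castor/JAXB:

```python
import xml.etree.ElementTree as ET

p = ET.Element("p")
p.text = "AT&T says 1 < 2"          # raw, unescaped content

serialized = ET.tostring(p, encoding="unicode")
# the library escapes on the way out: <p>AT&amp;T says 1 &lt; 2</p>

round_tripped = ET.fromstring(serialized).text
# ...and unescapes on the way back in, so an XPath/DOM consumer
# sees the original text, no manual unescaping hoop required
```

So as long as the programmatic clients use an XML parser (rather than string-munging the representation), the escape/unescape cost is invisible to them.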
Could you use an HTTP Link: header (http://esw.w3.org/topic/LinkHeader) to get the WADL if you have the URI of the resource? For example: Link: <meta.wadl>; rel=meta - John Marc Hadley wrote: > > > On Sep 11, 2007, at 2:42 PM, Mark Baker wrote: > >> Marc - I think GET's more appropriate, because that WADL is, in >> effect, a form, and forms should be first class hypermedia >> representations returned by dereferencing a URI. >> > Right, there's something meta about WADL that made me think that > OPTIONS would be a good choice but I take your point. > > Marc. > >> >> On 9/11/07, Marc Hadley <hadley@... <mailto:hadley%40sun.com>> wrote: >>> On Sep 11, 2007, at 12:19 PM, Griffin Caprio wrote: >>> >>>> OPTIONS would be interesting. I've just been using it to return >>>> acceptable methods, a la "Allow: Post, Get". I supposed returning >>>> a resource would work too. As first thought, something like: >>>> >>>> http://www.foo.com/ > <http://www.foo.com/><resource>/<action>/<request,response> >>>> >>>> would return the representation format for a particular actions >>>> request or response. But this doesn't feel to RESTy to me. >>>> >>> I meant returning a WADL[1] resource description, e.g.: >>> >>> OPTIONS on http://foo.com/resource <http://foo.com/resource> would > yield a response with the >>> allow header and the following in the response entity body:
Is that header still valid? I can't seem to find much info about it anywhere. - Griffin On Dec 5, 2007, at 9:54 AM, John Kemp wrote: > Could you use an HTTP Link: header (http://esw.w3.org/topic/ > LinkHeader) > to get the WADL if you have the URI of the resource? > > For example: > > Link: meta.wadl; rel=meta > > - John > > Marc Hadley wrote: >> >> >> On Sep 11, 2007, at 2:42 PM, Mark Baker wrote: >> >>> Marc - I think GET's more appropriate, because that WADL is, in >>> effect, a form, and forms should be first class hypermedia >>> representations returned by dereferencing a URI. >>> >> Right, there's something meta about WADL that made me think that >> OPTIONS would be a good choice but I take your point. >> >> Marc. >> >>> >>> On 9/11/07, Marc Hadley <hadley@... <mailto:hadley%40sun.com>> >>> wrote: >>>> On Sep 11, 2007, at 12:19 PM, Griffin Caprio wrote: >>>> >>>>> OPTIONS would be interesting. I've just been using it to return >>>>> acceptable methods, a la "Allow: Post, Get". I supposed returning >>>>> a resource would work too. As first thought, something like: >>>>> >>>>> http://www.foo.com/ >> <http://www.foo.com/><resource>/<action>/<request,response> >>>>> >>>>> would return the representation format for a particular actions >>>>> request or response. But this doesn't feel to RESTy to me. >>>>> >>>> I meant returning a WADL[1] resource description, e.g.: >>>> >>>> OPTIONS on http://foo.com/resource <http://foo.com/resource> would >> yield a response with the >>>> allow header and the following in the response entity body: >
Griffin Caprio wrote: > > > Is that header still valid? I can't seem to find much info about it > anywhere. It seems to be under discussion in the IETF HTTP WG - http://lists.w3.org/Archives/Public/ietf-http-wg/2007OctDec/thread.html#msg46 Regards, - John > > - Griffin > On Dec 5, 2007, at 9:54 AM, John Kemp wrote: > >> Could you use an HTTP Link: header (http://esw.w3.org/topic/ > <http://esw.w3.org/topic/> >> LinkHeader) >> to get the WADL if you have the URI of the resource? >> >> For example: >> >> Link: meta.wadl; rel=meta >> >> - John >> >> Marc Hadley wrote: >>> >>> >>> On Sep 11, 2007, at 2:42 PM, Mark Baker wrote: >>> >>>> Marc - I think GET's more appropriate, because that WADL is, in >>>> effect, a form, and forms should be first class hypermedia >>>> representations returned by dereferencing a URI. >>>> >>> Right, there's something meta about WADL that made me think that >>> OPTIONS would be a good choice but I take your point. >>> >>> Marc. >>> >>>> >>>> On 9/11/07, Marc Hadley <hadley@... <mailto:hadley%40sun.com> > <mailto:hadley%40sun.com>> >>>> wrote: >>>>> On Sep 11, 2007, at 12:19 PM, Griffin Caprio wrote: >>>>> >>>>>> OPTIONS would be interesting. I've just been using it to return >>>>>> acceptable methods, a la "Allow: Post, Get". I supposed returning >>>>>> a resource would work too. As first thought, something like: >>>>>> >>>>>> http://www.foo.com/ <http://www.foo.com/> >>> <http://www.foo.com/ > <http://www.foo.com/>><resource>/<action>/<request,response> >>>>>> >>>>>> would return the representation format for a particular actions >>>>>> request or response. But this doesn't feel to RESTy to me. >>>>>> >>>>> I meant returning a WADL[1] resource description, e.g.: >>>>> >>>>> OPTIONS on http://foo.com/resource <http://foo.com/resource> > <http://foo.com/resource <http://foo.com/resource>> would >>> yield a response with the >>>>> allow header and the following in the response entity body: >> > >
* pkeane <pkeane@...> [2007-12-04 00:20]: > On Mon, 3 Dec 2007, A. Pagaltzis wrote: > > * pkeane <pkeane@...> [2007-12-02 23:20]: > >> So might this be a good, standard method for remembering > >> the user? (Better, at least, than cookies?). > > > > Not to me. Without control of the clients, I would never > > mandate Javascript for a function as vital as logging out. > > I was thinking more along the lines of http auth as a better > option than cookies as a way to "remember" the user across > requests w/o embedding a user-id in the url. Sure. I’m just saying that when your clients are just browsers, whose HTTP auth implementation is broken, and you need Javascript to make it work, then you can’t lose the cookie option entirely simply because mandatory Javascript is a no-no. > http://permalink.gmane.org/gmane.comp.java.restlet/3180 > "First of all, I believe people need to get over this concept > of "logging in." For a RESTful request, there really is no > such thing; logging in implies server state and sessions, which > of course is not RESTful. When you request a protected > resource, the server should simply expect proper authentication > headers to be included in the request." Absolutely. I don’t use cookies as session keys. The only thing I use them for is to pass along auth credentials and an HMAC digest that the server can verify as a) corresponding to each other and b) originating from itself, both without having to store anything – essentially this is a proprietary HTTP Auth protocol tunnelled over the cookie header. There’s no hidden state stored on the server at all, so no REST constraint is violated. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
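[Editor's note: the stateless cookie scheme Aristotle describes can be sketched roughly as follows. This is a simplified illustration, not his actual implementation; a real deployment would also sign an expiry timestamp into the payload and set proper cookie attributes.]

```python
import hashlib
import hmac

SECRET = b"server-side secret"  # hypothetical key, known only to the server

def make_cookie(user: str) -> str:
    # Sign the credential so the server can later verify that it minted
    # this cookie itself -- no session table, no server-side state.
    digest = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{digest}"

def verify_cookie(cookie: str):
    user, _, digest = cookie.partition(":")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return user if hmac.compare_digest(digest, expected) else None

assert verify_cookie(make_cookie("alice")) == "alice"
assert verify_cookie("mallory:forged") is None
```

The server stores nothing between requests; all state rides in the cookie, which is why no REST constraint is violated.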
* pkeane <pkeane@...> [2007-12-01 17:30]:
> Does it follow that personalization can BEST be achieved by
> having a cookie that contains the user's id sit on the browser
> and used to construct URLs for XMLHTTPRequests (e.g.,
> http://example.com/userdata/{user-id}) that will return data to
> be inserted into the page?
So, after reading the rest of the thread, it appears that you
want to use cookies not for sending them to the server (which
doesn’t care about them), but merely for using them as an offline
storage mechanism for Javascript, so as to make dumb clients a
little smarter. Well, that’s application state, and keeping
application state on the client is half the definition of REST.
It also has nothing to do with the use case that the REST book
talks about on p.253 that you quoted.
What I don’t like so much about your description is that the
Javascript’s going to CONSTRUCT user-specific URIs based on the
ID in the cookie. Normally, this would be a violation of the
hypermedia constraint. OTOH, forms involve URI construction, but
they’re OK because they’re hypermedia published by the server;
and here we have Code on Demand, which in a way is a Turing-
complete kind of form. Coupling, which is what hypermedia aims
to resolve, is not an issue, since the code is published by the
server anyway and so can change whenever the server does.
So it’s not critical for your Javascript to follow links as
opposed to constructing URIs. Whether or not it should do so
depends on the flexibility you need to build into it. If you
construct URIs, then the Javascript needs to have awareness of
the URI space built into it, so if your URI space layout grows
complex, so will the Javascript. Then it would probably be better
for the Javascript to use hypermedia. However, as long as the URI
space layout is simple, it may not be worth the bother to bake
hypermedia into the Javascript.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
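[Editor's note: the trade-off described above can be made concrete. Both client styles below reach the same resource; the "userdata" relation and the URIs are hypothetical.]

```python
import re

# Hypermedia style: the server publishes the user-specific link in the
# representation, so the client only needs to know the link relation.
page = '<a rel="userdata" href="http://example.com/userdata/23831">My data</a>'
discovered = re.search(r'rel="userdata"\s+href="([^"]+)"', page).group(1)

# URI-construction style: the client bakes in knowledge of the URI space
# and fills in the user id taken from the cookie.
user_id = "23831"
constructed = "http://example.com/userdata/" + user_id

# Both arrive at the same URI; the difference is where the coupling lives.
assert discovered == constructed
```

With the hypermedia style the URI space can change freely; with construction, the client code must change in lockstep with it, which is tolerable only while the layout stays simple.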
* Elliotte Rusty Harold <elharo@...> [2007-12-02 02:00]: > pkeane wrote: > > Actually, the URI DOES identify the resource and nothing > > else. It's the cookie (to be used only in the XHR > > 'personalizing" request) that handles identity (and NOT as > > part of the page request). Requesting the page URI returns > > the exact same resource no matter who the user is (and thus > > it's cacheable). It's the CLIENT that "decides" if it wants > > to take the next step and personalize the page by firing an > > XHR. > > There's a fuzzy issue here of just what exactly constitutes a > resource. That’s why resources and representations are not the same thing. The line is not fuzzy, it’s completely arbitrary. REST itself is no guide to where to draw it; the usefulness of the resulting system is what will dictate the definition of a resource. > There's a line beyond which sufficient client personalization > has created a new resource, and such a resource should have its > own URL. Should it? Depends. This is just like the question of how to handle a document available in multiple languages. Do you use content negotiation or put each version behind an explicit URI? Doing the latter is not a magical fix-all solution. Both ways of going about it have their upsides and downsides and need to be weighed in context. And sometimes you’ll want to do some kind of mixture of both. F.ex., in an e-shop site, I would ask: if a user bookmarks a product page, or emails it to someone, what is it that they expect to point the other person to? Is the fact that the page had <a href="/user/23831">My account</a> at the top relevant at the time of the emailed link being clicked? Well, probably not. Part of this all is that there is no universal way for anyone but the origin server to know with certainty that two URIs identify the same resource. 
If 30 people bookmark my e-shop’s product page for caffeinated soap on del.icio.us, I’d want all their URIs to be identical so that their commentary on the product is grouped together as it should be. People could also find further commentary from various people around the web by searching for link:www.thinkgeek.com/caffeine/accessories/5a65/ in Google. If you mint too many URIs, then this identifiability gets lost. Think rdf:about. You want canonical URIs for as many *different* things as possible, but you also want to avoid duplicate URIs for the *same one thing* as much as possible. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
>>>>> "A" == A Pagaltzis <pagaltzis@...> writes:
A> Absolutely. I don’t use cookies as session keys. The only thing I
A> use them for is to pass along auth credentials and a HMAC digest
A> that the server can verify as a) corresponding to each other and
A> b) originating from itself, both without having to store anything
A> – essentially this is a propriertary HTTP Auth protocol tunnelled
A> over the cookie header. There’s no hidden state stored on the
A> server at all, so no REST constraint is violated.
But the problem is your custom solution cannot be easily integrated with
any other known tool. It's hard to automate access to your site,
everyone who wants to access your site by automated means needs to
custom-build access protocols.
I don't see any value in that, except stubbornness or unawareness of the
people who develop those arcane authentication protocols. And you can
bet it's full of security holes.
--
Cheers,
Berend de Boer
* Berend de Boer <berend@...> [2007-12-06 21:00]: >* A Pagaltzis <pagaltzis@...> writes: >> Absolutely. I don’t use cookies as session keys. The only >> thing I use them for is to pass along auth credentials and a >> HMAC digest that the server can verify as a) corresponding to >> each other and b) originating from itself, both without having >> to store anything – essentially this is a propriertary HTTP >> Auth protocol tunnelled over the cookie header. There’s no >> hidden state stored on the server at all, so no REST >> constraint is violated. > > But the problem is your custom solution cannot be easily > integrated with any other known tool. It's hard to automate > access to your site, everyone who wants to access your site by > automated means needs to custom built access protocols. Really? The login form is served as text/html over plain old HTTP. If the credentials are correct, the server sends a response with a cookie. Every browser under the sun understands that. What browsers do you know of that would have trouble with this? > I don't see any value in that, except stubbornness or > unawareness of the people who develop those arcane > authentication protocols. And you can bet it's full of > security holes. Ah, is it? Well, HMAC digests are not my brainchild. If you know of a weakness in them, you should probably take it up with the cryptographers who devised them. But let me know about the issues. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On 12/6/07, Berend de Boer <berend@...> wrote: > >>>>> "A" == A Pagaltzis <pagaltzis@...> writes: > > A> Absolutely. I don't use cookies as session keys. The only thing I > A> use them for is to pass along auth credentials and a HMAC digest > A> that the server can verify as a) corresponding to each other and > A> b) originating from itself, both without having to store anything > A> essentially this is a propriertary HTTP Auth protocol tunnelled > A> over the cookie header. There's no hidden state stored on the > A> server at all, so no REST constraint is violated. > > But the problem is your custom solution cannot be easily integrated with > any other known tool. It's hard to automate access to your site, > everyone who wants to access your site by automated means needs to > custom built access protocols. I'm doing the same thing in combination with HTTP Basic; I just let the client pick their preferred authentication scheme. Which one they choose has no bearing on how resources are managed, so no REST constraints are violated. > I don't see any value in that, except stubbornness or unawareness of the > people who develop those arcane authentication protocols. And you can > bet it's full of security holes. There's a long thread on the rails-core mailing list and a few blog posts discussing HMAC-digested client-stored cookies. There are some security issues raised specifically about the Rails implementation, but no holes in the overall scheme have been pointed out. Assaf > > -- > Cheers, > > Berend de Boer > >
Berend de Boer <berend@...> writes: >>>>>> "A" == A Pagaltzis <pagaltzis@...> writes: > > A> Absolutely. I don’t use cookies as session keys. The only thing I > A> use them for is to pass along auth credentials and a HMAC digest > A> that the server can verify as a) corresponding to each other and > A> b) originating from itself, both without having to store anything > A> – essentially this is a propriertary HTTP Auth protocol tunnelled > A> over the cookie header. There’s no hidden state stored on the > A> server at all, so no REST constraint is violated. > > But the problem is your custom solution cannot be easily integrated with > any other known tool. It's hard to automate access to your site, > everyone who wants to access your site by automated means needs to > custom built access protocols. > > I don't see any value in that, except stubbornness or unawareness of the > people who develop those arcane authentication protocols. And you can > bet it's full of security holes. Bah. Cookie authentication like this is useful. It's useful because you can develop your own secure-ish RESTful authentication system that will be approved by the product design people. Doesn't mean you can't support basic/digest auth for the ordinary web client though. -- Nic Ferrier http://www.woome.com - Enjoy the minute!
> There's a thread going now ("are cookies EVER restful") that seems
> to be
> settling on HTTP auth as a good alternative to cookies for
> remembering the
> logged in user. Some folks
> (http://www.peej.co.uk/articles/http-auth-with-html-forms.html) have
> formulated pretty good ideas for making it work. I also saw a blog
> entry (cannot find it just now) that has a nice method for logoff:
> have
> ONE route in your application accept "logoff" as the user and make
> sure
> all the OTHER pages reject (401) "logoff" as a user. Logoff is as
> simple
> as firing an XHR to that logoff page with "logoff" as the user and
> then
> redirecting to another page. Seems pretty straightforward to me
> (and I'll
> be exploring this, but I think it'll work across all browsers that
> support
> XMLHTTPRequest (XHR)).
Interesting technique but it may not pass muster with security folks.
First of all, sites want to provide a more secure way of presenting UI
for entering credentials (see, e.g., the Yahoo login page), and
transporting those to the server typically over SSL. HTTP/1.1 does not
help with the presentation part, while XHR does not help you send
credentials over HTTPS unless the host page itself is downloaded over
HTTPS due to the same-origin policy.
Subbu
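[Editor's note: the 401-based logoff trick quoted above can be sketched as a server-side rule. The reserved "logoff" username and the one permissive route follow the description; the route names and framing are illustrative.]

```python
# Reserved username for the logoff trick; all names here are illustrative.
LOGOFF_USER = "logoff"

def handle_request(path: str, user) -> int:
    """Return an HTTP status for a Basic-auth request, per the trick:
    exactly one route accepts "logoff" as a user, every other route
    rejects it with 401, so the browser replaces its cached credentials
    with ones that fail everywhere else -- an effective logout."""
    if path == "/logoff":
        return 200 if user == LOGOFF_USER else 401
    if user == LOGOFF_USER or user is None:
        return 401
    return 200

assert handle_request("/logoff", "logoff") == 200
assert handle_request("/home", "logoff") == 401
assert handle_request("/home", "alice") == 200
```

As Subbu notes, this says nothing about how the credentials are presented or transported, which is where the security objections come in.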
IMO, personalization changes the answer.
For instance, take Amazon's product pages. Each product has a URI that
uniquely identifies it. But each URI has a zillion representations
based on who is asking for the resource. Each user sees the same
product information except that it has recommendations and other links
specific to the user. When I bookmark the page to del.icio.us, or email
it to another user, the URI still resolves to the same product, but
with a personalized representation. To me, this is completely RESTful
and does not require user-specific URLs.
Subbu
On Dec 1, 2007, at 1:20 PM, Elliotte Rusty Harold wrote:
> pkeane wrote:
>
>> Does it follow that personalization can BEST be achieved by having a
>> cookie that contains the user's id sit on the browser and used to
>> construct URLs for XMLHTTPRequests (e.g.,
>> http://example.com/userdata/{user-id}) that will return data to be
>> inserted into the page?
>>
>> Note that I do not want the user-id to be included in the url for
>> the page
>> itself (e.g. http://example.com/home). I am assuming that the login
>> process, which can use HTTP Auth, will give the server the
>> opportunity to
>> set the cookie at the start of the login 'session'.
>>
>> thoughts?
>>
>
> You're breaking REST then. One fundamental principle is that the URI
> identifies the resource, nothing else. Addressing and authentication
> are
> two separate concerns, and you're mixing them up. Personalized
> resources
> require personalized URLs.
>
> The personalized URLs don't actually have to contain the user name if
> that bothers you for some reason. However they do have to be unique to
> the user for whom the data is personalized.
>
> It gets a little tricky when, as you describe here, one page contains
> resources accumulated from multiple URLs, but full REST requires that
> the page itself still have a unique, identifiable URL. The more you
> move
> away from this the less well the Web will work for you.
>
> --
> Elliotte Rusty Harold elharo@...
> Java I/O 2nd Edition Just Published!
> http://www.cafeaulait.org/books/javaio2/
> http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
>
>
On Mon, Dec 10, 2007 at 01:57:17PM -0800, Subbu Allamaraju wrote: > IMO, personalization changes the answer. > > For instance, take Amazon's product pages. Each product has a URI that > uniquely identifies it. But each URI has a zillion representations > based on who is asking for the resource. Each user sees the same > product information except that it has recommendations and other links > specific to the user. When I bookmark the page to del.ico.us, or email > it to another user, the URI still resolves to the same product, but > with a personalized representation. To me, this is completely RESTful > and does not require user-specific URLs. OTOH, it really hampers cacheability. This can be mitigated with ajax (heavily cache the common stuff, serve all the personalized stuff via xhr and then you can cache each of those as a separate resource too). If you want personalization for non-javascript clients, make sure you set appropriate etags. - PW
* Paul Winkler <pw_lists@...> [2007-12-11 01:25]: > If you want personalization for non-javascript clients, make sure you > set appropriate etags. Did you mean: _Vary header_ Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
I've written an introductory article about REST for InfoQ: http://www.infoq.com/articles/rest-introduction Feedback is most welcome. BTW: InfoQ is still looking for article contributions regarding REST, Web-style, RESTful Web services (or whatever your favorite moniker for "using the Web as it's supposed to be used" is). Thanks, Stefan
On Tue, Dec 11, 2007 at 04:51:20AM +0100, A. Pagaltzis wrote: > * Paul Winkler <pw_lists@...> [2007-12-11 01:25]: > > If you want personalization for non-javascript clients, make sure you > > set appropriate etags. > > Did you mean: _Vary header_ Err, yes, thank you for the correction. But set etags too :) -- Paul Winkler http://www.slinkp.com
On Dec 10, 2007, at 4:20 PM, Paul Winkler wrote: > On Mon, Dec 10, 2007 at 01:57:17PM -0800, Subbu Allamaraju wrote: >> IMO, personalization changes the answer. >> >> For instance, take Amazon's product pages. Each product has a URI >> that >> uniquely identifies it. But each URI has a zillion representations >> based on who is asking for the resource. Each user sees the same >> product information except that it has recommendations and other >> links >> specific to the user. When I bookmark the page to del.ico.us, or >> email >> it to another user, the URI still resolves to the same product, but >> with a personalized representation. To me, this is completely RESTful >> and does not require user-specific URLs. > > OTOH, it really hampers cacheability. This can be mitigated with ajax > (heavily cache the common stuff, serve all the personalized stuff via > xhr and then you can cache each of those as a separate resource too). > If you want personalization for non-javascript clients, make sure you > set appropriate etags. > Does it? Personalization is an important aspect of the web today, and companies have been building personalized sites successfully within the purview of HTTP. Downloading personalized data over XHR does improve cacheability of the rest of the page, but then again, the personalized data is not "public" cacheable. Subbu
You mean, "Vary" on the Cookie header? Subbu On Dec 10, 2007, at 7:51 PM, A. Pagaltzis wrote: > * Paul Winkler <pw_lists@...> [2007-12-11 01:25]: >> If you want personalization for non-javascript clients, make sure you >> set appropriate etags. > > Did you mean: _Vary header_ > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/> > > > > Yahoo! Groups Links > > >
Nice intro. I'll recommend it to people looking for information. An aside: One of the responses to the article raised what seems to be a standard objection to REST-over-HTTP: The "parameters too big" problem, wherein the number of parameters needed to specify a URI for a resource exceeds the practical limits of today's tools (*cough* IE). It seems to me that the standard response to this should be that if the amount of data needed for a query is that large and complicated, you might well want to represent it as a resource in its own right with its own (small) URI anyway. Certainly the example I saw recently of an image search application, which requires an image as an exemplar to base a search on, seems to fall into this category. Is this a reasonable answer? Stefan Tilkov wrote: > I've written an introductory article about REST for InfoQ: > > http://www.infoq.com/articles/rest-introduction > > Feedback is most welcome. > > BTW: InfoQ is still looking for article contributions regarding REST, > Web-style, RESTful Web services (or whatever your favorite moniker for > "using the Web as it's supposed to be used" is). > > Thanks, > Stefan
On Dec 12, 2007 1:26 PM, John Panzer <jpanzer@...> wrote: > It seems to me that the standard response to this should be that if the > amount of data needed for a query is that large and complicated, you > might well want to represent it as a resource in its own right with its > own (small) URI anyway. Certainly the example I saw recently of an > image search application, which requires an image as an examplar to base > a search on, seems to fall into this category. Is this a reasonable answer? Indeed, my answer to most of these things is that if you hit some limit like that within REST, you're doing something wrong. It may be that there *are* certain limits to things, but they are good hints; if it looks wrong, or can't be done the way you want it, or it's a bit kludgy, or hackish, then, well, you're doing it wrong. Alex -- --------------------------------------------------------------------------- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps ------------------------------------------ http://shelter.nu/blog/ --------
* Subbu Allamaraju <subbu.allamaraju@...> [2007-12-12 01:20]: > You mean, "Vary" on the Cookie header? Whichever headers drive personalisation at your site. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
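[Editor's note: the combined advice in this subthread — Vary on whichever headers drive personalisation, and set ETags as Paul suggests — might look like this on the server side. The header names and values are standard HTTP; the function framing is made up for illustration.]

```python
import hashlib

def personalized_headers(body: str, user: str) -> dict:
    # Vary tells shared caches that the representation depends on the
    # Cookie header; a strong ETag lets clients revalidate cheaply;
    # Cache-Control keeps the personalized variant out of public caches.
    etag = '"%s"' % hashlib.sha256((user + body).encode()).hexdigest()[:16]
    return {
        "Vary": "Cookie",
        "ETag": etag,
        "Cache-Control": "private, must-revalidate",
    }

headers = personalized_headers("<p>caffeinated soap</p>", "alice")
assert headers["Vary"] == "Cookie"
```

A conditional GET with If-None-Match against such an ETag gets a 304 when neither the page nor the user's view of it has changed, which recovers some of the cacheability Paul was worried about.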
On Dec 12, 2007, at 3:42 AM, Alexander Johannesen wrote:
> On Dec 12, 2007 1:26 PM, John Panzer <jpanzer@...> wrote:
> > It seems to me that the standard response to this should be that
> if the
> > amount of data needed for a query is that large and complicated, you
> > might well want to represent it as a resource in its own right
> with its
> > own (small) URI anyway. Certainly the example I saw recently of an
> > image search application, which requires an image as an examplar
> to base
> > a search on, seems to fall into this category. Is this a
> reasonable answer?
>
> Indeed, my answer to most of these things is that if you hit some
> limit like that within REST, you're doing something wrong. It may be
> that there *are* certain limits to things, but they are good hints ;
> if it looks wrong, or can't be done the way you want it, or it's a bit
> kludgy, or hackish, then, well, you're doing it wrong.
>
Nevertheless, can anyone fluent in HTTP explain whether a BODY in a
GET request is actually disallowed by the spec? I tried to figure this
out a while back and was unsure I got anything conclusive:
http://www.advogato.org/person/fxn/diary/474.html
-- fxn
Subbu Allamaraju wrote: > IMO, personalization changes the answer. > > For instance, take Amazon's product pages. Each product has a URI that > uniquely identifies it. But each URI has a zillion representations > based on who is asking for the resource. Each user sees the same > product information except that it has recommendations and other links > specific to the user. When I bookmark the page to del.ico.us, or email > it to another user, the URI still resolves to the same product, but > with a personalized representation. To me, this is completely RESTful > and does not require user-specific URLs. The resource is different, whether the abstract object denoted is the same or not. So it's not RESTful. Think about it - could it be cached? A RESTful approach to this would be to have a bookmarkable or shareable URL that denotes the target, which redirects on resolution to the personalised resource's URL. I can't see Amazon going for something like this though unless there were better support in HTTP and the deployed browser base for it - canonical URLs and silent redirects, so that the shareable/bookmarkable URL is the one that remains in the user's location bar. People are too used to just copying and pasting the URLs they see. -- Chris Burdess
On 12 Dec, 2007, at 2:47 AM, Xavier Noria wrote: > Nevertheless, can anyone fluent in HTTP explain whether a BODY in a > GET request is actually disallowed by the spec? I think the general consensus is that sending a body in a GET request is not disallowed. You may run into problems with intermediaries stripping the body, though. I would want to test this in quite a few environments before relying on it for a production system. Even with significant testing, I'd be nervous if I used it for general consumption. You might also find that client libraries (e.g. XHR) will not send a body with a GET request. That was my experience with a recent implementation of XHR in IE. Unless you feel like rolling your own HTTP library (eww), this could be problematic. ----- David Sidlinger david.sidlinger@...
Chris Burdess <dog@...> writes: > Subbu Allamaraju wrote: >> IMO, personalization changes the answer. >> >> For instance, take Amazon's product pages. Each product has a URI that >> uniquely identifies it. But each URI has a zillion representations >> based on who is asking for the resource. Each user sees the same >> product information except that it has recommendations and other links >> specific to the user. When I bookmark the page to del.ico.us, or email >> it to another user, the URI still resolves to the same product, but >> with a personalized representation. To me, this is completely RESTful >> and does not require user-specific URLs. > > The resource is different, whether the abstract object denoted is the > same or not. So it's not RESTful. Think about it - could it be cached? Please provide clarification: are you saying that something is a resource only if it can be cached? YS.
On Dec 12, 2007, at 6:43 AM, Yohanes Santoso wrote: > Chris Burdess <dog@...> writes: > >> Subbu Allamaraju wrote: >>> IMO, personalization changes the answer. >>> >>> For instance, take Amazon's product pages. Each product has a URI >>> that >>> uniquely identifies it. But each URI has a zillion representations >>> based on who is asking for the resource. Each user sees the same >>> product information except that it has recommendations and other >>> links >>> specific to the user. When I bookmark the page to del.ico.us, or >>> email >>> it to another user, the URI still resolves to the same product, but >>> with a personalized representation. To me, this is completely >>> RESTful >>> and does not require user-specific URLs. >> >> The resource is different, whether the abstract object denoted is the >> same or not. So it's not RESTful. Think about it - could it be >> cached? > > Please provide clarification: are you saying that something is a > resource only if it can be cached? Good question. I agree with you. Not every resource is cacheable, and there are lot of a resources that can only be cached in certain scopes. Subbu
On Dec 12, 2007, at 6:28 AM, Chris Burdess wrote: > A RESTful approach to this would be to have a bookmarkable or > shareable > URL that denotes the target, which redirects on resolution to the > personalised resource's URL. I can't see Amazon going for something > like > this though unless there were better support in HTTP and the deployed > browser base for it - canonical URLs and silent redirects, so that the > shareable/bookmarkable URL is the one that remains in the user's > location bar. People are too used to just copying and pasting the URLs > they see. Right. The URI in the Amazon case is still shareable and bookmarkable, but the representation that the client would receive is personalized. In that sense, it is a variant of the same resource. Subbu
> > You might also find that client libraries (e.g. XHR) will not send a > body with a GET request. That was my experience with a recent You are right. Recently there was some discussion about this on the webapi WG (http://lists.w3.org/Archives/Public/public-webapi/2007Dec/0008.html ). Subbu
Yohanes Santoso wrote: > Please provide clarification: are you saying that something is a > resource only if it can be cached? Of course not. If the referent of the resource is truly dynamic. However, what is the resource in this case? Is it the book? Or is it Jim's view of the book? Neither of those are particularly dynamic. The problem is that what you want to share is the URL of the book, and what Amazon wants to display is Jim's (or Annie's, or whoever's) view of the book. -- Chris Burdess
* Chris Burdess <dog@...> [2007-12-12 15:30]: > The resource is different, Representation. > whether the abstract object denoted is the same or not. Resource. > So it's not RESTful. Hm. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Xavier Noria <fxn@...> [2007-12-12 09:50]: > Nevertheless, can anyone fluent in HTTP explain whether a BODY > in a GET request is actually disallowed by the spec? I tried > to figure this out a while back and was unsure I got anything > conclusive: > > http://www.advogato.org/person/fxn/diary/474.html I continue being baffled at this confusion. What part of the spec is unclear? Section 4.3 states that “unless explicitly allowed for a request method, a message-body MUST NOT be sent”; section 9.3 indeed does not explicitly allow a message-body in requests with the GET method. Therefore RFC 2616 forbids message bodies in GET requests. I fail to see how any other conclusion is possible. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Chris Burdess wrote: > Yohanes Santoso wrote: > >> Please provide clarification: are you saying that something is a >> resource only if it can be cached? >> > > Of course not. If the referent of the resource is truly dynamic. > However, what is the resource in this case? Is it the book? Or is it > Jim's view of the book? Neither of those are particularly dynamic. Third option: The view of the user associated with the request is dynamic. It's also cacheable for many interesting cases, at least with forced revalidation. > The > problem is that what you want to share is the URL of the book, and what > Amazon wants to display is Jim's (or Annie's, or whoever's) view of the > book. > Actually, you probably do want to send your friend to their own personalized view of the book. If the URL you have is specific to you, you will have great difficulty doing that. If the URL you have is for "the view of the current user" it's trivial, because it's the same conceptual resource. This appears to be an intermittent permathread...
Chris Burdess <dog@...> writes: > Yohanes Santoso wrote: >> Please provide clarification: are you saying that something is a >> resource only if it can be cached? > > Of course not. If the referent of the resource is truly dynamic. > However, what is the resource in this case? Is it the book? Or is it > Jim's view of the book? > > The problem is that what you want to share is the URL of the book, > and what Amazon wants to display is Jim's (or Annie's, or whoever's) > view of the book. When my browser displays a page containing detailed information about a book and a URL that looks like it could identify a book-like resource, I tend to think that the displayed URL identifies a book resource. But things may not be what they appear to be. A process can be a resource too. Maybe the identified resource is actually a process that outputs a personalised representation, depending on the information in the request, that contains information about the book among other things. YS.
Very nice article, thanks. John
On 12/12/07, A. Pagaltzis <pagaltzis@...> wrote: > * Xavier Noria <fxn@...> [2007-12-12 09:50]: > > Nevertheless, can anyone fluent in HTTP explain whether a BODY > > in a GET request is actually disallowed by the spec? I tried > > to figure this out a while back and was unsure I got anything > > conclusive: > > > > http://www.advogato.org/person/fxn/diary/474.html > > I continue being baffled at this confusion. What part of the spec > is unclear? Section 4.3 states that "unless explicitly allowed > for a request method, a message-body MUST NOT be sent"; section > 9.3 indeed does not explictly allow a message-body in requests > with the GET method. That's very misleading, as that text does not appear in 4.3. What it *does* say is this; "A message-body MUST NOT be included in a request if the specification of the request method (section 5.1.1) does not allow sending an entity-body in requests" Since GET doesn't rule it out, it is supported. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
Mark Baker wrote: > A. Pagaltzis <pagaltzis@... <mailto:pagaltzis%40gmx.de> > wrote: >> I continue being baffled at this confusion. What part of the spec >> is unclear? Section 4.3 states that "unless explicitly allowed >> for a request method, a message-body MUST NOT be sent"; section >> 9.3 indeed does not explicitly allow a message-body in requests >> with the GET method. > That's very misleading, as that text does not appear in 4.3. What it > *does* say is this; > "A message-body MUST NOT be included in a request if the specification > of the request method (section 5.1.1) does not allow sending an > entity-body in requests" > > Since GET doesn't rule it out, it is supported. I would agree if the text said "A message-body MUST NOT be included in a request if the specification of the request method (section 5.1.1) disallows sending an entity-body in requests." However, as it stands, the specification doesn't provide any allowance for an entity body, so we MUST NOT include one. - Brian
NOTE: this is still an open item for the HTTP-WG: http://www.w3.org/Protocols/HTTP/1.1/rfc2616bis/issues/#i19 As for my own work, all my current implementations do not check the Content-Length and just ignore Entity Bodies on GET. And I sure would not like to be the person who has to handle the implementation details of how intermediaries would track and maintain the Entity Bodies for cache-able GET requests [grin]. Mike A On 12/12/07, Brian Smith <brian@...> wrote: > Mark Baker wrote: > > A. Pagaltzis <pagaltzis@... <mailto:pagaltzis%40gmx.de> > wrote: > >> I continue being baffled at this confusion. What part of the spec > >> is unclear? Section 4.3 states that "unless explicitly allowed > >> for a request method, a message-body MUST NOT be sent"; section > >> 9.3 indeed does not explictly allow a message-body in requests > >> with the GET method. > > > That's very misleading, as that text does not appear in 4.3. What it > > *does* say is this; > > > "A message-body MUST NOT be included in a request if the specification > > of the request method (section 5.1.1) does not allow sending an > > entity-body in requests" > > > > Since GET doesn't rule it out, it is supported. > > I would agree if the text said "A message-body MUST NOT be included in a > request if the specification of the request method (section 5.1.1) > disallows sending an entity-body in requests." > > However, as it stands, the specification doesn't provide any allowance > for a entity body, so we MUST NOT include one. > > - Brian -- mca "In a time of universal deceit, telling the truth becomes a revolutionary act. " (George Orwell)
On Dec 12, 2007, at 9:04 PM, Brian Smith wrote:
> Mark Baker wrote:
> > A. Pagaltzis <pagaltzis@... <mailto:pagaltzis%40gmx.de> > wrote:
> >> I continue being baffled at this confusion. What part of the spec
> >> is unclear? Section 4.3 states that "unless explicitly allowed
> >> for a request method, a message-body MUST NOT be sent"; section
> >> 9.3 indeed does not explicitly allow a message-body in requests
> >> with the GET method.
>
> > That's very misleading, as that text does not appear in 4.3. What it
> > *does* say is this;
>
> > "A message-body MUST NOT be included in a request if the
> specification
> > of the request method (section 5.1.1) does not allow sending an
> > entity-body in requests"
> >
> > Since GET doesn't rule it out, it is supported.
>
> I would agree if the text said "A message-body MUST NOT be included
> in a
> request if the specification of the request method (section 5.1.1)
> disallows sending an entity-body in requests."
>
> However, as it stands, the specification doesn't provide any allowance
> for a entity body, so we MUST NOT include one.
>
I think we need Propositional Calculus 101 :-).
The RFC in that section only specifies when message-bodies MUST NOT
be included in requests. GET does not satisfy those conditions,
therefore as per sections 5.1.1 and 5.9 message-bodies are *allowed*
in GET requests. Strictly speaking the rule says nothing about what
happens when nothing is said, but an RFC is not an axiomatic system, so
in practice you interpret that it follows, or else you can say that it
is simply unspecified.
As I blogged in the URL above, I see a stopper in section 9.3 in an
indirect way:
    The GET method means retrieve whatever information (in the form of
    an entity) is identified by the Request-URI.
That implies the resource has to be completely identified by the URI.
My motivation here was to try to figure out whether we could send GET
when GET is the correct verb, and overcome the practical limitations
in URL lengths. That sentence seems to rule that out.
-- fxn
* Mark Baker <distobj@...> [2007-12-12 20:55]:
> On 12/12/07, A. Pagaltzis <pagaltzis@...> wrote:
> > I continue being baffled at this confusion. What part of the
> > spec is unclear? Section 4.3 states that "unless explicitly
> > allowed for a request method, a message-body MUST NOT be
> > sent"; section 9.3 indeed does not explictly allow a
> > message-body in requests with the GET method.
>
> That's very misleading, as that text does not appear in 4.3.
Yes, it’s not a quote. It’s a rephrasing that exactly mirrors the
meaning of the text while being less grammatically unwieldy. How
does that mislead anyone?
> What it *does* say is this;
>
> "A message-body MUST NOT be included in a request if the
> specification of the request method (section 5.1.1) does not
> allow sending an entity-body in requests"
Allow me to shorten for clarity:
A message-body MUST NOT be included in a request if the […]
method […] does not allow [it].
Do you agree that the grammatical semantics of this sentence
is identical, and that I have only reduced the precision of the
referents a little?
If so, do you agree that the following insertion does not change
the meaning?
A message-body MUST NOT be included in a request if the
method does not [explicitly] allow it.
If you disagree, can you explain why the sentences differ?
> Since GET doesn't rule it out, it is supported.
That would be the case if the default was opt-out. But by my
reading it’s unambiguously opt-in. And GET doesn’t.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
* Chris Burdess <dog@...> [2007-12-12 18:15]: > However, what is the resource in this case? Is it the book? > Or is it Jim's view of the book? Neither of those are > particularly dynamic. The problem is that what you want to > share is the URL of the book, and what Amazon wants to display > is Jim's (or Annie's, or whoever's) view of the book. To which the answer (or response, if you will…) is not 30x/Location, but 200/Content-Location. If only it wasn’t broken on the browser web. (Another thing to think of: if I were to PUT to a URI, what resource would I be operating on?) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
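Aristotle's 200/Content-Location suggestion can be sketched as a tiny WSGI handler. Everything below (the URI scheme, the use of REMOTE_USER, the choice of Vary header) is an illustrative assumption, not how Amazon or any real site actually does it:

```python
def personalized_book_view(environ, start_response):
    """Minimal WSGI sketch: serve a personalized representation at the
    shared, bookmarkable URI, and name the user-specific variant via a
    Content-Location response header (a 200, not a 30x redirect)."""
    user = environ.get("REMOTE_USER", "anonymous")
    body = ("book page personalized for %s" % user).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        # hypothetical URI scheme for the per-user variant
        ("Content-Location", "/books/1234/views/%s" % user),
        # the representation varies with the requester's credentials
        ("Vary", "Cookie"),
    ])
    return [body]
```

The point of the design is that the URI in the location bar stays shareable, while the response itself admits it is a personalized variant.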
Hello all, Please help me explain this stuff better, or just correct what I've written. In my blog I called [1] JJ Dubray out for claiming that REST "cannot efficiently deal with the state changes (content and lifecycle) of a resource". In later comments and further posting by JJ the real issue is: "what must be shared between a provider and consumer" Some "shared understanding" is required for a machine consumer to consume and especially trigger state changing actions (through PUT or POST) of a Resource. The example that JJ presented is a service that manages Job Applications and the various states a job app goes through are triggered by remote consumers. In my most recent post [2] I tried to explain this "shared understanding" as being in a Representation: * conforming to one or more shared schemas * be extensible * can't pre-define URIs * can't pre-constrain state transition paths (resource state transitions, not client) Hmmm, I think the essential questions are: 1) What is the smallest (most constrained) shared understanding possible? 2) In what ways is that different from WSDL/WADL? Thanks for any help!!! John [1] http://johnheintz.blogspot.com/2007/11/just-in-rest-cant-handle-state.html [2] http://johnheintz.blogspot.com/2007/12/shared-understanding-andor-evolvability.html -- John D. Heintz Principal Consultant New Aspects of Software http://newaspects.com http://johnheintz.blogspot.com Austin, TX (512) 633-1198
Yes. In other words, any HTTP request message is allowed to contain a message body, and thus must parse messages with that in mind. Server semantics for GET, however, are restricted such that a body, if any, has no semantic meaning to the request. The requirements on parsing are separate from the requirements on method semantics. So, yes, you can send a body with GET, and no, it is never useful to do so. This is part of the layered design of HTTP/1.1 that will become clear again once the spec is partitioned (work in progress). ....Roy
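Roy's parsing-vs-semantics distinction can be made concrete: the message grammar happily admits a GET carrying a body, even though no GET semantics attach to it. A minimal sketch, with a made-up helper that just assembles the raw request bytes:

```python
def build_get_with_body(host: str, path: str, body: bytes) -> bytes:
    """Construct a raw HTTP/1.1 GET request that carries a message body.

    Per the message grammar this parses fine (the body is framed by
    Content-Length); per Roy's reading, a server must *parse* it but
    assign it no meaning for GET.
    """
    head = (
        "GET %s HTTP/1.1\r\n"
        "Host: %s\r\n"
        "Content-Length: %d\r\n"
        "\r\n" % (path, host, len(body))
    )
    return head.encode("ascii") + body
```

A recipient that, like Mike's implementations, ignores entity bodies on GET would still have to consume those `len(body)` bytes to find the start of the next request on the connection.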
* Xavier Noria <fxn@...> [2007-12-12 22:55]:
> The RFC in that section only specifies when message-bodies
> MUST NOT be included in requests.
Aha, I see how people arrive at that reading of the spec. You
read it like this:
A message-body MUST NOT be included in a request if the
specification of the request method (section 5.1.1) does
NOT ALLOW sending an entity-body in requests.
What I see is:
A message-body MUST NOT be included in a request if the
specification of the request method (section 5.1.1) DOES
NOT allow sending an entity-body in requests.
English is ambiguous here: which verb should the negation be
grouped with? Let’s see if the spec provides any clue: what
do the method specifications say about request bodies?
• OPTIONS declares the use of such a body legal, but refrains
from defining any meaning for it.
• GET says nothing (we knew that), and neither does HEAD.
• POST, PUT obviously permit entity bodies; both define the
meaning of such a body, as well.
• DELETE says nothing.
• Neither do TRACE and CONNECT.
So the specifications of request methods only ever mention the
request body in order to explicitly permit it. That suggests that
explicit permission is necessary, whereas explicit prohibition is
not; which in turn suggests that prohibition is the implicit
default.
I argue that my reading of the spec is correct:
Message bodies in GET requests are forbidden by RFC 2616.
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi Roy, I wish your mail had arrived before I started composing my other reply… oh well. * Roy T. Fielding <fielding@...> [2007-12-12 23:55]: > In other words, any HTTP request message is allowed to contain > a message body, and thus must parse messages with that in mind. > Server semantics for GET, however, are restricted such that a > body, if any, has no semantic meaning to the request. Aha! Thanks for the clarification. That makes (some amount of) sense. I have to say I do find it misleading that section 4.3 formally sets up an option for method specifications to prohibit request bodies, when no method specification in RFC 2616 actually makes use of that option. Does that sentence in 4.3 serve any purpose, given your above clarification? Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
A. Pagaltzis wrote: > > The resource is different, > > Representation. > > > whether the abstract object denoted is the same or not. > > Resource. Not exactly. As I understand it, a representation is a serialisation of a resource's state at a particular instant in time. A single resource may have different representations based on the various aspects it may expose, of which (in HTTP) the server informs the client using the Vary mechanism. What we're talking about here are different resources, since there is no content negotiation as such that can occur to provide a different representation. At the same time these resources are related, since they all represent X's view of some other specific, shared object. -- Chris Burdess
I have a couple of common patterns in our data models and I'm having a hard time figuring out a good way to map them to resource names. The larger picture is rich media content, I'll take videos in particular right now, but the same patterns also apply to documents and audio. So a video is uploaded, and for that video we have some base metadata such as title, description, author, etc. Then we have metadata for the video file itself. Duration, codec attributes, dimensions, etc. The video file is always transcoded to at least one other format, usually two, and the original is always kept. So we have a base video item and three related items that hold metadata for the specific format. The metadata for the video file itself will contain a resource name for the actual file. In the database, the base video item is stored in one table, and the file metadata itself is stored in another table, linked together with a FK. So how best to represent this structure using REST resources that provide basic CRUD operations? One catch is that I really need a single query that will return the base metadata plus format specific metadata for a single format, but updates should be separate queries. Ideas?
Amazon URLs are user-specific; the querystring uid makes sure of that.
The issue then is that, while the URI *is* technically cacheable,
it's not a very valuable cache item (outside the Amazon network) since
it will be used by very few client browsers.
mikea
On 12/10/07, Subbu Allamaraju <subbu.allamaraju@...> wrote:
> IMO, personalization changes the answer.
>
> For instance, take Amazon's product pages. Each product has a URI that
> uniquely identifies it. But each URI has a zillion representations
> based on who is asking for the resource. Each user sees the same
> product information except that it has recommendations and other links
> > specific to the user. When I bookmark the page to del.icio.us, or email
> it to another user, the URI still resolves to the same product, but
> with a personalized representation. To me, this is completely RESTful
> and does not require user-specific URLs.
>
> Subbu
>
> On Dec 1, 2007, at 1:20 PM, Elliotte Rusty Harold wrote:
>
> > pkeane wrote:
> >
> >> Does it follow that personalization can BEST be achieved by having a
> >> cookie that contains the user's id sit on the browser and used to
> >> construct URLs for XMLHTTPRequests (e.g.,
> >> http://example.com/userdata/{user-id}) that will return data to be
> >> inserted into the page?
> >>
> >> Note that I do not want the user-id to be included in the url for
> >> the page
> >> itself (e.g. http://example.com/home). I am assuming that the login
> >> process, which can use HTTP Auth, will give the server the
> >> opportunity to
> >> set the cookie at the start of the login 'session'.
> >>
> >> thoughts?
> >>
> >
> > You're breaking REST then. One fundamental principle is that the URI
> > identifies the resource, nothing else. Addressing and authentication
> > are
> > two separate concerns, and you're mixing them up. Personalized
> > resources
> > require personalized URLs.
> >
> > The personalized URLs don't actually have to contain the user name if
> > that bothers you for some reason. However they do have to be unique to
> > the user for whom the data is personalized.
> >
> > It gets a little tricky when, as you describe here, one page contains
> > resources accumulated from multiple URLs, but full REST requires that
> > the page itself still have a unique, identifiable URL. The more you
> > move
> > away from this the less well the Web will work for you.
> >
> > --
> > Elliotte Rusty Harold elharo@...
> > Java I/O 2nd Edition Just Published!
> > http://www.cafeaulait.org/books/javaio2/
> > http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
> >
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
On Dec 14, 2007 6:31 AM, ochs.chris <ochs.chris@...> wrote: > So how best to represent this structure using rest resources that > provides basic crud operations? Try AtomPub and model the resources in terms of Collection, Media Link Entry and Media Resource. > One catch is that I really need a single query that will return the > base metadata plus format specific metadata for a single format, but > updates should be separate queries. Atom 1.0 should be sufficient to represent the base metadata, you can choose a media extension (e.g. mRSS) to represent video-specific metadata. E.g. an entry may contain the base metadata, two mrss:content elements to represent video in alternate formats, one 'edit' link to update base metadata and one 'edit-media' link to update the original video file. -- Teo Hui Ming
I'll second that recommendation. Look at Atom Entry, which allows you to describe a single item in great detail. You can then collect multiple entries into an Atom Feed. At first, it might feel odd to use Atom, but the initial oddity will quickly pay for itself when you don't have to describe the format yourself in great detail. Standards are golden! Cheers, - Steve -------------- Steve G. Bjorg http://wiki.mindtouch.com http://wiki.opengarden.org On Dec 13, 2007, at 5:44 PM, Teo Hui Ming wrote: > On Dec 14, 2007 6:31 AM, ochs.chris <ochs.chris@...> wrote: > > So how best to represent this structure using rest resources that > > provides basic crud operations? > > try on AtomPub and model the resources in terms of Collection, Media > Link Entry and Media Resource. > > > One catch is that I really need a single query that will return the > > base metadata plus format specific metadata for a single format, but > > updates should be separate queries. > > Atom 1.0 should be sufficient to represent the base metadata, you can > choose a media extension (e.g. mRSS) to represent video-specific > metadata. E.g. an entry may contain the base metadata, two > mrss:content elements to represent video in alternate formats, one > 'edit' link to update base metadata and one 'edit-media' link to > update the original video file. > > -- > Teo Hui Ming > >
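For the video case above, an Atom entry along these lines might work. All identifiers, URLs, and attribute values below are made up for illustration; `media:content` is the Media RSS extension Teo mentioned:

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:media="http://search.yahoo.com/mrss/">
  <id>tag:example.com,2007:video/42</id>
  <title>Sample video</title>
  <updated>2007-12-14T00:00:00Z</updated>
  <!-- base metadata lives in the entry itself -->
  <summary>Description of the video</summary>
  <!-- one media:content element per available format -->
  <media:content url="http://example.com/media/42.flv"
                 type="video/x-flv" duration="120"/>
  <media:content url="http://example.com/media/42.mp4"
                 type="video/mp4" duration="120"/>
  <!-- 'edit' updates the base metadata; 'edit-media' replaces the file -->
  <link rel="edit" href="http://example.com/videos/42"/>
  <link rel="edit-media" href="http://example.com/videos/42/original"/>
</entry>
```

A GET on the entry answers the "base metadata plus format metadata in one query" requirement; PUT to the `edit` and `edit-media` URIs keeps the updates separate.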
I am working on the details of supporting multiple media types for a resource. To this point, I have concentrated on supporting the Accept header as a way to allow clients to inform the server on what media type to use for the representation on a GET request. This all seems fine. Now I am wondering how important (or common) it is to provide multiple media type support for POST and PUT. I would assume Content-Type would be used by the client to communicate this info. I would also assume that servers could respond with Status 415 if the Content-Type was not supported for the POST or PUT. Any guidance or pointers to references on this topic are appreciated. Mike A -- mca "In a time of universal deceit, telling the truth becomes a revolutionary act. " (George Orwell)
I'm a strong believer in overloading by content-type. This is the direction we're heading with our wiki API as well. The interaction space is defined by URI x VERB x TYPE, which is simple to explain, yet very extensible for the future. Cheers, - Steve -------------- Steve G. Bjorg http://wiki.mindtouch.com http://wiki.opengarden.org On Dec 13, 2007, at 7:42 PM, mike amundsen wrote: > I am working on the details of supporting multiple media types for a > resource. To this point, I have concentrated on supporting the Accept > header as a way to allow clients to inform the server on what media > type to use for the representation on a GET request. This all seems > fine. > > Now I am wondering how important (or common) it is to provide multiple > media type support for POST and PUT. I would assume Content-Type would > be used by the client to communicate this info. I would also assume > that servers could respond with Status 415 if the Content-Type was not > supported for the POST or PUT. > > Any guidance or pointers to references on this topic are appreciated. > > Mike A > > -- > mca > "In a time of universal deceit, telling the truth becomes a > revolutionary act. " (George Orwell) > >
mike amundsen wrote: > > > I am working on the details of supporting multiple media types for a > resource. To this point, I have concentrated on supporting the Accept > header as a way to allow clients to inform the server on what media > type to use for the representation on a GET request. This all seems > fine. > > Now I am wondering how important (or common) it is to provide multiple > media type support for POST and PUT. I would assume Content-Type would > be used by the client to communicate this info. I would also assume > that servers could respond with Status 415 if the Content-Type was not > supported for the POST or PUT. > > Any guidance or pointers to references on this topic are appreciated. > > Mike A I would strongly discourage using content-negotiation for authoring. Let the server return a Content-Location upon GET/HEAD; and use that URI for modifying the resource. BR, Julian
Julian: Maybe I am not reading your reply correctly, did you mean Content-Location or Content-Type? Again, my question is two-fold: - how common is it to support multiple media types for POST/PUT - if supported, what is the best way to tell the client to supported media types? Mike A On 12/14/07, Julian Reschke <julian.reschke@...> wrote: > > I would strongly discourage to use content-negotiation for authoring. > > Let the server return a Content-Location upon GET/HEAD; and use that URI > for modifying the resource. > > BR, Julian > -- mca "In a time of universal deceit, telling the truth becomes a revolutionary act. " (George Orwell)
Julian, I believe Mike was asking about supporting multiple content-types for a request, not negotiating the response content-type. - Steve -------------- Steve G. Bjorg http://wiki.mindtouch.com http://wiki.opengarden.org On Dec 14, 2007, at 2:46 AM, Julian Reschke wrote: > mike amundsen wrote: > > > > > > I am working on the details of supporting multiple media types for a > > resource. To this point, I have concentrated on supporting the > Accept > > header as a way to allow clients to inform the server on what media > > type to use for the representation on a GET request. This all seems > > fine. > > > > Now I am wondering how important (or common) it is to provide > multiple > > media type support for POST and PUT. I would assume Content-Type > would > > be used by the client to communicate this info. I would also assume > > that servers could respond with Status 415 if the Content-Type > was not > > supported for the POST or PUT. > > > > Any guidance or pointers to references on this topic are > appreciated. > > > > Mike A > > I would strongly discourage to use content-negotiation for authoring. > > Let the server return a Content-Location upon GET/HEAD; and use > that URI > for modifying the resource. > > BR, Julian > >
Mike, if I understood you correctly, then #1 is extremely common. For #2, I don't have a good answer. Maybe this could be done via an OPTIONS request on the resource. - Steve -------------- Steve G. Bjorg http://wiki.mindtouch.com http://wiki.opengarden.org On Dec 14, 2007, at 6:18 AM, mike amundsen wrote: > Julian: > > Maybe I am not reading your reply correctly, did you mean > Content-Location or Content-Type? > > Again, my question is two-fold: > - how common is it to support multiple media types for POST/PUT > - if supported, what is the best way to tell the client to supported > media types? > > Mike A > > On 12/14/07, Julian Reschke <julian.reschke@...> wrote: > > > > I would strongly discourage to use content-negotiation for > authoring. > > > > Let the server return a Content-Location upon GET/HEAD; and use > that URI > > for modifying the resource. > > > > BR, Julian > > > > -- > mca > "In a time of universal deceit, telling the truth becomes a > revolutionary act. " (George Orwell) > >
Steve Bjorg wrote: > Julian, > > I believe Mike was asking about supporting multiple content-types for a > request, not negotiating the response content-type. > > - Steve Yes. I have assumed that the point of that for PUT and POST would be to support authoring of varying representations. Am I wrong? BR, Julian
mike amundsen wrote: > > > Julian: > > Maybe I am not reading your reply correctly, did you mean > Content-Location or Content-Type? Content-Location. > Again, my question is two-fold: > - how common is it to support multiple media types for POST/PUT I think it is uncommon, at least for PUT. > - if supported, what is the best way to tell the client to supported > media types? Upfront? I may be misunderstanding what you're trying to do. Is this about authoring (PUT) a resource that has different representations, being content-negotiated upon GET (such as varying on the language)? BR, Julian
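Julian's Content-Location advice amounts to a simple client-side rule: after a GET or HEAD on the negotiated URI, author against the variant's own URI if the server named one. A trivial sketch (the function name is made up):

```python
def put_target(request_uri: str, response_headers: dict) -> str:
    """Pick the URI to PUT to after a GET/HEAD.

    If the server returned a Content-Location naming the specific
    variant that was served, author against that URI rather than the
    content-negotiated one; otherwise fall back to the request URI.
    """
    return response_headers.get("Content-Location", request_uri)
```

This keeps authoring unambiguous even when the readable resource has several negotiated variants.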
On Dec 14, 2007, at 2:18 PM, mike amundsen wrote: > Again, my question is two-fold: > - how common is it to support multiple media types for POST/PUT I think this must be fairly common, or if not it should be. It's kind of the point, right? You have a resource or some collection of resources that you want updated. The most flexible way of supporting update would be to support multiple media types on input. In the systems I've worked with (primarily wikis[1]), we'll accept wikitext, HTML, JSON packages that include wikitext or HTML along with some metadata. We've explored Atom and various (other) XML packagings too, but have not solidified those as we don't yet have the use case. On the server side we store the incoming representation (after some translation) to our canonical storage representation, wikitext, and then make it available in a variety of outgoing representations. On input we look at the Content-Type header. If we don't support a particular type, we send a 415. Ideally we'd respond with the types we do support but we didn't get around to that. For output we look at the accept header. We don't do full content-negotiation with the header: we hope the client will send a single type that they accept, not multiples from which we'll pick the one we like best. We try with that latter bit but it never comes out as happy as we'd like, so in our own client code we always just accept one thing and one thing only. Our thinking in all this is that for our style of API--which is purely a data mover, not something we expect to use as a human UI (although it works just great that way if you are happy not having a bunch of UI furniture)--we want the URIs for the resources to really be the place where things happen: for any entity its URL is the place where you GET it, you PUT it and you DELETE it, for any collection its URL is the place where you GET it and you POST to it. We don't bother providing forms output or support for CGI form content-types input.
If we did want to have a resource which was the editor for other resources, then we might, but at this point we expect the client side to make its own editor. [1] Socialtext REST API docs: https://www.socialtext.net/st-rest-docs/index.cgi -- Chris Dent http://www.burningchrome.com/ or I'll remove your commit bit
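The dispatch-on-Content-Type pattern Chris describes might look like this in outline. The media type names and return values below are placeholders for illustration, not Socialtext's actual API:

```python
# Hypothetical handler table: request media type -> storage routine.
HANDLERS = {
    "text/x.wikitext": lambda body: ("stored wikitext", 204),
    "text/html": lambda body: ("stored html", 204),
}

def handle_put(content_type: str, body: bytes):
    """Dispatch a PUT on its Content-Type; 415 for unsupported types."""
    # drop parameters such as "; charset=utf-8" before matching
    media_type = content_type.split(";")[0].strip().lower()
    handler = HANDLERS.get(media_type)
    if handler is None:
        # ideally the 415 body would also list the supported types
        return ("unsupported media type: " + media_type, 415)
    return handler(body)
```

Note the parameter-stripping step: matching on the raw header value would wrongly reject `text/html; charset=utf-8`.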
Julian: Thanks for helping me clarify. I may be making a lot out of nothing here. If I want to support PUT/POST to a resource URL in more than one media type (application/atom+xml, application/rss+xml) what is the best way to communicate that to clients? Mike A On 12/14/07, Julian Reschke <julian.reschke@...> wrote: > mike amundsen wrote: > > > > > > Julian: > > > > Maybe I am not reading your reply correctly, did you mean > > Content-Location or Content-Type? > > Content-Location. > > > Again, my question is two-fold: > > - how common is it to support multiple media types for POST/PUT > > I think it is uncommon, at least for PUT. > > > - if supported, what is the best way to tell the client to supported > > media types? > > Upfront? > > I may be misunderstanding what you're trying to do. Is this about > authoring (PUT) a resource that has different representations, being > content-negotiated upon GET (such as varying on the language)? > > BR, Julian > > > -- mca "In a time of universal deceit, telling the truth becomes a revolutionary act. " (George Orwell)
mike amundsen wrote: > Julian: > > Thanks for helping me clarify. I may be making a lot out of nothing here. > > If I want to support PUT/POST to a resource URL in more than one media > type (application/atom+xml, application/rss+xml) what is the best way > to communicate that to clients? OK, so this is about two different formats essentially containing the same information, not two variants (such as "en" and "fr" language versions that could exist independently of each other). I don't think there's a mechanism defined in RFC2616 for doing this. A good idea may be to extend "Accept" to also be available as a response header, and suggest that servers return it with a 415 (Unsupported Media Type) status. BR, Julian
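Julian's suggestion, a 415 whose response carries an "Accept" header listing the writable types, might look like this. RFC 2616 defines no such response header, so this would be a private convention between a server and its documented clients:

```python
# Sketch of the proposed extension above: on an unwritable media type,
# return 415 and echo the supported types in an "Accept" response header.
# This header usage is a proposal from the thread, not part of RFC 2616.

SUPPORTED_WRITE_TYPES = ["application/atom+xml", "application/rss+xml"]

def unsupported_media_type_response(supported=SUPPORTED_WRITE_TYPES):
    return {
        "status": 415,
        "headers": {
            # Proposed: Accept as a *response* header on 415.
            "Accept": ", ".join(supported),
            "Content-Type": "text/plain",
        },
        "body": "Use one of: " + ", ".join(supported),
    }
```

HTTP later adopted this shape elsewhere: the PATCH method's spec defines an Accept-Patch response header that advertises writable types in exactly this way.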
On Dec 14, 2007, at 3:39 PM, mike amundsen wrote: > If I want to support PUT/POST to a resource URL in more than one media > type (application/atom+xml, application/rss+xml) what is the best way > to communicate that to clients? Docs. Maybe not best, but easiest. When the client learns of the existence of your service, make the information available then. When you send a 415 to your client, point them to the information in the error message in the body of the 415 response. Leave to humans what humans do best (learn), and leave to computers what computers do best (dreary, boring automatable stuff). -- Chris Dent http://www.burningchrome.com/ or I'll remove your commit bit
Chris: Thanks for the response. Your approach sounds quite similar to the one upon which I am depending. Esp. your description of how "conneg" is handled for requests. I have decided to mark resources with a "server-preferred" media type. If the client sends an Accept header that (in any way) can be interpreted to support the "server-preferred" media type for that resource (e.g. "*/*"), then the server sends that representation. As for your pattern for Content-Type support on PUT/POST, this all makes sense, too. At this point, I am still vague on how media type support is communicated to the client. I suppose this is mostly done via API documentation, right? I've never seen anyone talk about the various discovery models yet. And only one mention of the use of OPTIONS. Mike A On 12/14/07, Chris Dent <cdent@...> wrote: > > On Dec 14, 2007, at 2:18 PM, mike amundsen wrote: > > > Again, my question is two-fold: > > - how common is it to support multiple media types for POST/PUT > > I think this must be fairly common, or if not it should be. It's kind > of the point right? You have a resource or some collection of > resources that you want updated. The most flexible way of supporting > update would be to support multiple media types on input. > > In the systems I've worked with (primarily wikis[1]), we'll accept > wikitext, HTML, JSON packages that include wikitext or HTML along with > some metadata. We've explored Atom and various (other) XML packagings > too, but have not solidified those as we don't yet have the use case. > > On the server side we store the incoming representation (after some > translation) to our canonical storage representation, wikitext, and > then make it available in a variety of outgoing representations. > > On input we look at the Content-Type header. If we don't support a > particular type, we send a 415. Ideally we'd respond with the types we > do support but we didn't get around to that. > > For output we look at the accept header. 
We don't do full content- > negotiation with the header: we hope the client to send a single type > that they accept, not multiples from which we'll pick the one we like > best. We try with that latter bit but it never comes out as happy as > we'd like, so in our own client code we always just accept one thing > and one thing only. > > Our thinking in all this is that for our style of API--which is purely > a data mover, not something we expect to use as a human UI (although > it works just great that way if you are happy not having a bunch of UI > furniture)--we want the URIs for the resources to really be the place > where things happen: for any entity its URL is the place where you GET > it, you PUT it and you DELETE it, for any collection its URL is the > place where you GET it and you POST to it. We don't bother providing > forms output or support for CGI form content-types input. If we did > want to have a resource which was the editor for other resources, then > we might, but at this point we expect the client side to make its own > editor. > > > [1] Socialtext REST API docs: https://www.socialtext.net/st-rest-docs/index.cgi > -- > Chris Dent http://www.burningchrome.com/ > or I'll remove your commit bit > > > > -- mca "In a time of universal deceit, telling the truth becomes a revolutionary act. " (George Orwell)
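Mike's "server-preferred" rule (serve the preferred type whenever the client's Accept header can in any way be read as allowing it, e.g. "*/*") can be sketched without any q-value juggling. The function name and the 406 fallback are illustrative:

```python
# Sketch of the "server-preferred media type" rule described above: each
# resource has one preferred type; serve it if the Accept header allows
# it via exact match, type/* or */*. Anything else would get a 406.

def accept_allows(accept_header, preferred):
    ptype, psub = preferred.split("/")
    for item in accept_header.split(","):
        media = item.split(";")[0].strip().lower()  # drop q= parameters
        if media in ("*/*", preferred, ptype + "/*"):
            return True
    return False
```

This deliberately ignores quality values and multi-type ranking, matching the thread's observation that full content negotiation "never comes out as happy as we'd like".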
Julian: Yes, we are understanding each other better now. Extending Accepts is one possibility. Chris Dent mentions adding information to the body of a 415 response, too. Mike A On 12/14/07, Julian Reschke <julian.reschke@...> wrote: > mike amundsen wrote: > > Julian: > > > > Thanks for helping me clarify. I may be making a lot out of nothing here. > > > > If I want to support PUT/POST to a resource URL in more than one media > > type (application/atom+xml, application/rss+xml) what is the best way > > to communicate that to clients? > > OK, so this is about two different formats essentially containing the > same information, not two variants (such as "en" and "fr" language > versions that could exist independently of each other). > > I don't think there's a mechanism defined in RFC2616 for doing this. > > A good idea may be to extend "Accept" to be able also available as a > response header, and suggest that servers return it with a 415 > (Unsupported Media Type) status. > > BR, Julian > > -- mca "In a time of universal deceit, telling the truth becomes a revolutionary act. " (George Orwell)
On 12/12/07, John D. Heintz <jheintz@...> wrote: > Hello all, > > Please help me explain this stuff better, or just correct what I've written. > > In my blog I called [1] JJ Dubray out for claiming that REST "cannot > efficiently deal with the state changes (content and lifecycle) of a > resource". > > In later comments and further posting by JJ the real issue is: > "what must be shared between a provider and consumer" So once again, like clock-work my e-mail client decided to reply but not to the list, and I ended up in a private conversation about this with John. Too long to summarize the entire thread, but here's the gist of the point I'm trying to make. We know the stuff JJD is talking about works for SOAP with various levels of success, call it 1st generation Web services if you will, now how do we move forward and apply it to REST? And I'm talking about REST with benefits, not RPC over HTTP, so obviously things will translate differently, you'll end up designing processes in a different way to take advantage of REST characteristics. But what would it look like? The key problem here is composition. The simplest use case I could come up with is this. We have a workflow and a task manager. The workflow pushes tasks to the task manager, people use the task manager to manage and perform these tasks, and the outcomes are fed back to the workflow. The task manager is a resource, so is every task created there, so you can imagine using POST to create tasks, PUT to update them, ETags for caching and conflict resolution, and all other good things. The workflow also has a resource, an outcome, that the task manager updates when the task completes, or deletes if the task is cancelled. So they're acting as P2P, and I'm picking this as a typical scenario indicative of more complex composition problems we're seeing out there, but simple enough to wrap my head around it. We have two teams. 
Team red is working on the workflow, which does a lot of other things, not interesting for this discussion. Team blue is working on the task manager. We're digging a tunnel from both sides planning to meet in the middle and open it up for traffic. What is the minimum shared understanding that both teams need to make it happen? If I'm using tools to help with the design, build test case scenarios, change management in future versions, what artifacts would I need? And what would make the end result compelling over WS-* and beneficial in its usage of REST? Assaf > > Some "shared understanding" is required for a machine consumer to > consume and especially trigger state changing actions (through PUT or > POST) of a Resource. The example that JJ presented is a service that > manages Job Applications and the various states a job app goes through > are triggered by remote consumers. > > In my most recent post [2] I tried to explain this "shared > understanding" as being in a Representation: > * conforming to one or more shared schemas > * be extensible > * can't pre-define URIs > * can't pre-constrain state transition paths (resource state > transitions, not client) > > Hmmm, I think the essential questions are: > 1) What is the smallest (most constrained) shared understanding possible? > 2) In what ways is that different from WSDL/WADL? > > Thanks for any help!!! > John > > [1] http://johnheintz.blogspot.com/2007/11/just-in-rest-cant-handle-state.html > [2] http://johnheintz.blogspot.com/2007/12/shared-understanding-andor-evolvability.html > > > > -- > John D. Heintz > Principal Consultant > New Aspects of Software > http://newaspects.com > http://johnheintz.blogspot.com > Austin, TX > (512) 633-1198 > > > > Yahoo! Groups Links > > > >
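One way to make "minimum shared understanding" concrete for the workflow/task-manager scenario above is to write the contract down as nothing more than verbs, URI templates, and meanings. Everything below (initiator names, URI shapes) is invented for illustration:

```python
# Sketch of a minimal shared contract for the two-team tunnel above:
# who initiates, which verb, against which URI, meaning what. All URIs
# and role names here are hypothetical.

SHARED_CONTRACT = [
    # (initiator, method, uri template, meaning)
    ("workflow",     "POST",   "/tasks",         "create a task"),
    ("person",       "GET",    "/tasks/{id}",    "read a task"),
    ("person",       "PUT",    "/tasks/{id}",    "update/perform a task"),
    ("task-manager", "PUT",    "/outcomes/{id}", "report task completion"),
    ("task-manager", "DELETE", "/outcomes/{id}", "task was cancelled"),
]

def verbs_used(contract=SHARED_CONTRACT):
    """The whole surface area both teams must agree on, verb-wise."""
    return sorted({method for _, method, _, _ in contract})
```

Beyond this table, the remaining shared understanding is the media types of the task and outcome representations, which is where the schema/extensibility questions in John's post come in.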
On Dec 14, 2007, at 7:38 AM, Chris Dent wrote: > > On Dec 14, 2007, at 2:18 PM, mike amundsen wrote: > >> Again, my question is two-fold: >> - how common is it to support multiple media types for POST/PUT > > I think this must be fairly common, or if not it should be. It's kind > of the point right? You have a resource or some collection of > resources that you want updated. The most flexible way of supporting > update would be to support multiple media types on input. I think there is a difference between offering multiple representations of a resource via GET and updating the same resource via a POST/PUT. With POST/PUT, you are creating/changing the resource itself, not just creating/changing a "representation" of the resource. Given this, there is no reason to offer a symmetry between GET and POST/ PUT. That is, if a GET is capable of returning application/json, application/xml and text/html, it does not imply that a POST or PUT should support processing creation/update of the resource with the payload expressed via the same media types. You may instead decide to support application/x-www-form-urlencoded if the clientele is browsers. The approach you state below may be specific to your clientele, but I don't think that is the general pattern. Subbu > In the systems I've worked with (primarily wikis[1]), we'll accept > wikitext, HTML, JSON packages that include wikitext or HTML along with > some metadata. We've explored Atom and various (other) XML packagings > too, but have not solidified those as we don't yet have the use case. > > On the server side we store the incoming representation (after some > translation) to our canonical storage representation, wikitext, and > then make it available in a variety of outgoing representations. > > On input we look at the Content-Type header. If we don't support a > particular type, we send a 415. Ideally we'd respond with the types we > do support but we didn't get around to that. > > For output we look at the accept header. 
We don't do full content- > negotiation with the header: we hope the client to send a single type > that they accept, not multiples from which we'll pick the one we like > best. We try with that latter bit but it never comes out as happy as > we'd like, so in our own client code we always just accept one thing > and one thing only. > > Our thinking in all this is that for our style of API--which is purely > a data mover, not something we expect to use as a human UI (although > it works just great that way if you are happy not having a bunch of UI > furniture)--we want the URIs for the resources to really be the place > where things happen: for any entity its URL is the place where you GET > it, you PUT it and you DELETE it, for any collection its URL is the > place where you GET it and you POST to it. We don't bother providing > forms output or support for CGI form content-types input. If we did > want to have a resource which was the editor for other resources, then > we might, but at this point we expect the client side to make its own > editor. > > > [1] Socialtext REST API docs: https://www.socialtext.net/st-rest-docs/index.cgi > -- > Chris Dent http://www.burningchrome.com/ > or I'll remove your commit bit > > > > > > > Yahoo! Groups Links > > >
On 12/14/07, mike amundsen <mamund@...> wrote: > At this point, I am still vague on how media type support is > communicated to the client. I suppose this is mostly done vai API > documentation, right? I've need seen anyone talk about the various > discovery models yet. And only one mention of the use of OPTIONS. I can tell you I do that as a matter of practice, supporting multiple content-types on requests and responses, specifically for structured data. If the client is a Web browser then the input format (url-encoded or multipart) is specified by the form or XHR. If the client is automated, then it needs to know what the relevant information is and how to structure it, so that's covered by a specification which tells you which content types are applicable. I do make a conscious effort to maintain some symmetry, so for some content-type you're to POST/PUT the same document you'll GET back. That works for JSON and XML, but there's not a lot of requests for HTML input or url-encoded output. Assaf > > Mike A > > > On 12/14/07, Chris Dent <cdent@...> wrote: > > > > On Dec 14, 2007, at 2:18 PM, mike amundsen wrote: > > > > > Again, my question is two-fold: > > > - how common is it to support multiple media types for POST/PUT > > > > I think this must be fairly common, or if not it should be. It's kind > > of the point right? You have a resource or some collection of > > resources that you want updated. The most flexible way of supporting > > update would be to support multiple media types on input. > > > > In the systems I've worked with (primarily wikis[1]), we'll accept > > wikitext, HTML, JSON packages that include wikitext or HTML along with > > some metadata. We've explored Atom and various (other) XML packagings > > too, but have not solidified those as we don't yet have the use case. 
> > > > On the server side we store the incoming representation (after some > > translation) to our canonical storage representation, wikitext, and > > then make it available in a variety of outgoing representations. > > > > On input we look at the Content-Type header. If we don't support a > > particular type, we send a 415. Ideally we'd respond with the types we > > do support but we didn't get around to that. > > > > For output we look at the accept header. We don't do full content- > > negotiation with the header: we hope the client to send a single type > > that they accept, not multiples from which we'll pick the one we like > > best. We try with that latter bit but it never comes out as happy as > > we'd like, so in our own client code we always just accept one thing > > and one thing only. > > > > Our thinking in all this is that for our style of API--which is purely > > a data mover, not something we expect to use as a human UI (although > > it works just great that way if you are happy not having a bunch of UI > > furniture)--we want the URIs for the resources to really be the place > > where things happen: for any entity its URL is the place where you GET > > it, you PUT it and you DELETE it, for any collection its URL is the > > place where you GET it and you POST to it. We don't bother providing > > forms output or support for CGI form content-types input. If we did > > want to have a resource which was the editor for other resources, then > > we might, but at this point we expect the client side to make its own > > editor. > > > > > > [1] Socialtext REST API docs: https://www.socialtext.net/st-rest-docs/index.cgi > > -- > > Chris Dent http://www.burningchrome.com/ > > or I'll remove your commit bit > > > > > > > > > > > -- > mca > "In a time of universal deceit, telling the truth becomes a > revolutionary act. " (George Orwell) > > > > Yahoo! Groups Links > > > >
On Dec 15, 2007, at 5:27 AM, Subbu Allamaraju wrote: > On Dec 14, 2007, at 7:38 AM, Chris Dent wrote: >> On Dec 14, 2007, at 2:18 PM, mike amundsen wrote: >> >>> Again, my question is two-fold: >>> - how common is it to support multiple media types for POST/PUT >> >> I think this must be fairly common, or if not it should be. It's kind >> of the point right? You have a resource or some collection of >> resources that you want updated. The most flexible way of supporting >> update would be to support multiple media types on input. > > I think there is a difference between offering multiple > representations of a resource via GET and updating the same resource > via a POST/PUT. With POST/PUT, you are creating/changing the > resource itself, not just creating/changing a "representation" of > the resource. Given this, there is reason to offer a symmetry > between GET and POST/PUT. That is, if a GET is capable of returning > application/json, application/xml and text/html, it does not imply > that a POST or PUT should support processing creation/update of the > resource with the payload expressed via the same media types. You > may instead decide to support application/x-www-form-urlencoded if > the clientele is browsers. I don't think I understand what you're trying to say here. I didn't, or didn't mean to, suggest that there should be symmetry between media types supported for GET and media types supported for POST/PUT. I simply said that more flexibility is available in the system if POST/ PUT can accept multiple media typed representations, do a bit of transformation (e.g. html -> text), and create/change resources in a canonical storage form. Of course a canonical storage form is not strictly necessary, there could be no transformation, or the transformation could happen at the time of a GET. The resource being an abstract platonic thingie, useable in multiple forms. 
You can imagine, perhaps, a service that accepts (PUT/POST) documents in a bunch of formats (RTF, Text, HTML, Markdown, textbox from a form) but only supplies (GET) documents as HTML. Or the other way round. Or both. Symmetry of media types isn't the goal here, flexibly solving a problem is. The symmetry I did and do support is that the PUT and GET URI for a resource should be the same (otherwise, seems like it is not really a URI?). > The approach you state below may be specific to your clientele, but > I don't think that is the general pattern. Well, all I can tell you is that it seems to work really well for a system where documents and their "metadata" are the primary resources. If it works well for me, it might be useful for others, so I shared. Other people are working with more process oriented systems; in those situations other solutions may work better. -- Chris Dent http://www.burningchrome.com/ or I'll remove your commit bit
I agree with Chris. Symmetry is irrelevant. For some scenarios it makes sense for others it doesn't. POST/PUT can accept various kinds of media-types and GET can provide various kinds of media-types. How these correlate to each other is resource specific and therefore irrelevant to RESTful design. - Steve -------------- Steve G. Bjorg http://wiki.mindtouch.com http://wiki.opengarden.org On Dec 15, 2007, at 3:33 AM, Chris Dent wrote: > > On Dec 15, 2007, at 5:27 AM, Subbu Allamaraju wrote: > > On Dec 14, 2007, at 7:38 AM, Chris Dent wrote: > >> On Dec 14, 2007, at 2:18 PM, mike amundsen wrote: > >> > >>> Again, my question is two-fold: > >>> - how common is it to support multiple media types for POST/PUT > >> > >> I think this must be fairly common, or if not it should be. It's > kind > >> of the point right? You have a resource or some collection of > >> resources that you want updated. The most flexible way of > supporting > >> update would be to support multiple media types on input. > > > > I think there is a difference between offering multiple > > representations of a resource via GET and updating the same resource > > via a POST/PUT. With POST/PUT, you are creating/changing the > > resource itself, not just creating/changing a "representation" of > > the resource. Given this, there is reason to offer a symmetry > > between GET and POST/PUT. That is, if a GET is capable of returning > > application/json, application/xml and text/html, it does not imply > > that a POST or PUT should support processing creation/update of the > > resource with the payload expressed via the same media types. You > > may instead decide to support application/x-www-form-urlencoded if > > the clientele is browsers. > > I don't think I understand what you're trying to say here. I didn't, > or didn't mean to, suggest that there should be symmetry between media > types supported for GET and media types supported for POST/PUT. 
I > simply said that more flexibility is available in the system if POST/ > PUT can accept multiple media typed representations, do a bit of > transformation (e.g. html -> text), and create/change resources in a > canonical storage form. > > Of course a canonical storage form is not strictly necessary, there > could be no transformation, or the transformation could happen at the > time of a GET. The resource being an abstract platonic thingie, > useable multiple forms. > > You can imagine, perhaps, a service that accepts (PUT/POST) documents > in a bunch of formats (RTF, Text, HTML, Markdown, textbox from a from) > but only supplies (GET) documents as HTML. Or the other way round. Or > both. > > Symmetry of media types isn't the goal here, flexibly solving a > problem is. > > The symmetry I did and do support is that the PUT and GET URI for a > resource should be the same (otherwise, seems like it is not really a > URI?). > > > The approach you state below may be specific to your clientele, but > > I don't think that is the general pattern. > > Well, all I can tell you is that it seems to work really well for a > system where documents and their "metadata" are the primary resources. > If it works well for me, it might be useful for others, so I shared. > Other people are working with more process oriented systems; in those > situations other solutions may work better. > > -- > Chris Dent http://www.burningchrome.com/ > or I'll remove your commit bit > > >
>> I think there is a difference between offering multiple >> representations of a resource via GET and updating the same >> resource via a POST/PUT. With POST/PUT, you are creating/changing >> the resource itself, not just creating/changing a "representation" >> of the resource. Given this, there is reason to offer a symmetry >> between GET and POST/PUT. That is, if a GET is capable of returning >> application/json, application/xml and text/html, it does not imply >> that a POST or PUT should support processing creation/update of the >> resource with the payload expressed via the same media types. You >> may instead decide to support application/x-www-form-urlencoded if >> the clientele is browsers. > > I don't think I understand what you're trying to say here. I didn't, > or didn't mean to, suggest that there should be symmetry between > media types supported for GET and media types supported for POST/ > PUT. I simply said that more flexibility is available in the system > if POST/PUT can accept multiple media typed representations, do a > bit of transformation (e.g. html -> text), and create/change > resources in a canonical storage form. Looks like we are agreeing to the same. For flexibility sake, yes, you may offer several encoding formats. Subbu
Is somebody from Amazon on this list? Please tell me you're not using GET requests like it describes here: http://docs.amazonwebservices.com/AmazonSimpleDB/2007-11-07/DeveloperGuide/MakingRESTRequests.html Are you calling this REST because it's sexy? :DG<
[ Attachment content not displayed ]
> On 12/15/07, Dimitri Glazkov <dimitri.glazkov@...> wrote: > > Is somebody from Amazon on this list? Please tell me you're not using > > GET requests like it describes here: Amazon does the same thing with their ecommerce api, which this list has criticized long ago. Not that it did any good...I just checked, the latest version is the same way. http://docs.amazonwebservices.com/AWSECommerceService/2007-10-29/DG/ http://ecs.amazonaws.com/onca/xml? Service=AWSECommerceService& AWSAccessKeyId=[AWS Access Key ID]& AssociateId=[ID]& Operation=CartCreate& Item.1.OfferListingId=B000062TU1& Item.1.Quantity=2& MergeCart=True They even use GET with forms that add items to shopping carts: <form method="GET" action="http://www.amazon.com/gp/aws/cart/add.html"> <input type="hidden" name="AWSAccessKeyId" value="Access Key ID" /><br/> <input type="hidden" name="AssociateTag" value="Associate Tag" /><br/> <p>One Product<br/> ASIN:<input type="text" name="ASIN.1"/><br/> OfferListingId:<input type="text" name="OfferListingId.1"/><br/> Quantity:<input type="text" name="Quantity.1"/><br/> ExchangeId:<input type="text" name="ExchangeId.1"/><br/> SellerId:<input type="text" name="SellerId.1"/><br/> <p>Another Product<br/> ASIN:<input type="text" name="ASIN.2"/><br/> OfferListingId:<input type="text" name="OfferListingId.2"/><br/> Quantity:<input type="text" name="Quantity.2"/><br/> ExchangeId:<input type="text" name="ExchangeId.2"/><br/> SellerId:<input type="text" name="SellerId.2"/><br/> </p> <input type="submit" name="add" value="add" /> </form>
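For contrast, here is a rough sketch of that cart creation done as a POST, which is where a state-creating operation belongs: link prefetchers, caches, and crawlers all treat GET as safe to repeat. The /carts collection URI and the 201-with-Location convention are assumptions for illustration, not Amazon's actual API:

```python
# Sketch: the cart-add GET above, rebuilt as a POST to a hypothetical
# cart collection. The Operation=CartCreate parameter disappears; the
# verb plus the collection URI carry that meaning instead.

from urllib.parse import urlencode

def build_cart_create_request(access_key, offers):
    """offers: list of (offer_listing_id, quantity) pairs."""
    fields = {"AWSAccessKeyId": access_key}
    for i, (listing, qty) in enumerate(offers, start=1):
        fields["Item.%d.OfferListingId" % i] = listing
        fields["Item.%d.Quantity" % i] = str(qty)
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    # The server would answer 201 Created, with the new cart's URI
    # in the Location header; subsequent GET/PUT/DELETE go there.
    return ("POST", "/carts", headers, urlencode(fields))
```

The HTML form in the previous message needs only `method="POST"` and the collection URI as its `action` to follow the same discipline.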
Hi, As I just wrote about DBPedia in "James Gosling has a foaf name" [1] I thought this would be a good time to point people to that article which shows very clearly how beautifully complementary RDF and REST are. Henry [1] http://blogs.sun.com/bblfish/entry/james_gosling_has_a_foaf
This is a variant of what I call as SOAPy REST (http://subbu.org/weblogs/main/2007/10/soapy_rest.html ). Whoever wrote this API had no idea of why they were providing a resource centric interface. Yet another HTTP API! Subbu On Dec 15, 2007, at 14:37, "Dimitri Glazkov" <dimitri.glazkov@...> wrote: > Is somebody from Amazon on this list? Please tell me you're not using > GET requests like it describes here: > > http://docs.amazonwebservices.com/AmazonSimpleDB/2007-11-07/DeveloperGuide/MakingRESTRequests.html > > Are you calling this REST because it's sexy? > > :DG< > > > > Yahoo! Groups Links > > >
Subbu Allamaraju wrote: > > > This is a variant of what I call as SOAPy REST (http://subbu. > org/weblogs/ main/2007/ 10/soapy_ rest.html > <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> > ). Whoever wrote this API had no idea of why they were providing a > resource centric interface. Yet another HTTP API! No, it's even much Much MUCH worse -- it uses GET for non-retrieval actions. BR, Julian
On Dec 16, 2007 5:49 PM, Julian Reschke <julian.reschke@...> wrote: > Subbu Allamaraju wrote: > > > > > > This is a variant of what I call as SOAPy REST (http://subbu. > > org/weblogs/ main/2007/ 10/soapy_ rest.html > > <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> > > ). Whoever wrote this API had no idea of why they were providing a > > resource centric interface. Yet another HTTP API! > > No, it's even much Much MUCH worse -- it uses GET for non-retrieval actions. I nominate it for the 2007 Restless awards, in the much contested category of "things that claim to be RESTful but do side effects in their GETs" along with the ever popular "SOAP endpoint in disguise" category I know this mailing list has not, historically, had such awards, but now is as good a time to start as any....
+1 I just tried to "refactor" this API (http://www.subbu.org/weblogs/main/2007/12/a_restful_versi.html ) to be resource oriented, and it is not that hard. Subbu On Dec 16, 2007, at 12:57 PM, Steve Loughran wrote: > On Dec 16, 2007 5:49 PM, Julian Reschke <julian.reschke@...> wrote: >> Subbu Allamaraju wrote: >>> >>> >>> This is a variant of what I call as SOAPy REST (http://subbu. >>> org/weblogs/ main/2007/ 10/soapy_ rest.html >>> <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> >>> ). Whoever wrote this API had no idea of why they were providing a >>> resource centric interface. Yet another HTTP API! >> >> No, it's even much Much MUCH worse -- it uses GET for non-retrieval >> actions. > > I nominate it for the 2007 Restless awards, in the much contested > category of > > "things that claim to be RESTful but do side effects in their GETs" > along with the ever popular > "SOAP endpoint in disguise" category > > I know this mailing list has not, historically, had such awards, but > now is as good a time to start as any.... > > > > Yahoo! Groups Links > > >
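The mechanical flavor of such a refactoring can be shown as a plain lookup from SimpleDB operation names to method/URI pairs. The URI shapes below are illustrative, not Subbu's actual proposal:

```python
# Sketch: SimpleDB's Operation=... parameters map almost mechanically
# onto method + URI pairs once domains and items become resources.
# These URI templates are invented for illustration.

def restify(operation, domain=None, item=None):
    table = {
        "CreateDomain":  ("PUT",    "/domains/{domain}"),
        "DeleteDomain":  ("DELETE", "/domains/{domain}"),
        "ListDomains":   ("GET",    "/domains"),
        "PutAttributes": ("PUT",    "/domains/{domain}/items/{item}"),
        "GetAttributes": ("GET",    "/domains/{domain}/items/{item}"),
        # Query expressions would go in the query string of this URI.
        "Query":         ("GET",    "/domains/{domain}/items"),
    }
    method, template = table[operation]
    return method, template.format(domain=domain, item=item)
```

Once the mapping is this regular, the non-retrieval operations naturally stop being GETs, which was the main complaint upthread.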
On 12/16/07, Subbu Allamaraju <subbu.allamaraju@...> wrote: > +1 > > I just tried to "refactor" this API (http://www.subbu.org/weblogs/main/2007/12/a_restful_versi.html > ) to be resource oriented, and it is not that hard. Subbu, here's a slightly different take on the same principle: http://blog.labnotes.org/2007/12/17/dehorrible-restifying-simpledb/ Assaf > > Subbu > > On Dec 16, 2007, at 12:57 PM, Steve Loughran wrote: > > > On Dec 16, 2007 5:49 PM, Julian Reschke <julian.reschke@...> wrote: > >> Subbu Allamaraju wrote: > >>> > >>> > >>> This is a variant of what I call as SOAPy REST (http://subbu. > >>> org/weblogs/ main/2007/ 10/soapy_ rest.html > >>> <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> > >>> ). Whoever wrote this API had no idea of why they were providing a > >>> resource centric interface. Yet another HTTP API! > >> > >> No, it's even much Much MUCH worse -- it uses GET for non-retrieval > >> actions. > > > > I nominate it for the 2007 Restless awards, in the much contested > > category of > > > > "things that claim to be RESTful but do side effects in their GETs" > > along with the ever popular > > "SOAP endpoint in disguise" category > > > > I know this mailing list has not, historically, had such awards, but > > now is as good a time to start as any.... > > > > > > > > Yahoo! Groups Links > > > > > > > > > > > Yahoo! Groups Links > > > >
That was quick :) Great. Subbu On Dec 17, 2007, at 1:59 AM, Assaf Arkin wrote: > On 12/16/07, Subbu Allamaraju <subbu.allamaraju@...> wrote: >> +1 >> >> I just tried to "refactor" this API (http://www.subbu.org/weblogs/main/2007/12/a_restful_versi.html >> ) to be resource oriented, and it is not that hard. > > Subbu, here's a slightly different take on the same principle: > http://blog.labnotes.org/2007/12/17/dehorrible-restifying-simpledb/ > > Assaf > >> >> Subbu >> >> On Dec 16, 2007, at 12:57 PM, Steve Loughran wrote: >> >>> On Dec 16, 2007 5:49 PM, Julian Reschke <julian.reschke@...> >>> wrote: >>>> Subbu Allamaraju wrote: >>>>> >>>>> >>>>> This is a variant of what I call as SOAPy REST (http://subbu. >>>>> org/weblogs/ main/2007/ 10/soapy_ rest.html >>>>> <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> >>>>> ). Whoever wrote this API had no idea of why they were providing a >>>>> resource centric interface. Yet another HTTP API! >>>> >>>> No, it's even much Much MUCH worse -- it uses GET for non-retrieval >>>> actions. >>> >>> I nominate it for the 2007 Restless awards, in the much contested >>> category of >>> >>> "things that claim to be RESTful but do side effects in their GETs" >>> along with the ever popular >>> "SOAP endpoint in disguise" category >>> >>> I know this mailing list has not, historically, had such awards, but >>> now is as good a time to start as any.... >>> >>> >>> >>> Yahoo! Groups Links >>> >>> >>> >> >> >> >> >> Yahoo! Groups Links >> >> >> >>
On 12/17/07, Subbu Allamaraju <subbu.allamaraju@...> wrote: > That was quick :) It was surprisingly easy, couple of hours. Assaf > > Great. > > Subbu > > On Dec 17, 2007, at 1:59 AM, Assaf Arkin wrote: > > > On 12/16/07, Subbu Allamaraju <subbu.allamaraju@...> wrote: > >> +1 > >> > >> I just tried to "refactor" this API (http://www.subbu.org/weblogs/main/2007/12/a_restful_versi.html > >> ) to be resource oriented, and it is not that hard. > > > > Subbu, here's a slightly different take on the same principle: > > http://blog.labnotes.org/2007/12/17/dehorrible-restifying-simpledb/ > > > > Assaf > > > >> > >> Subbu > >> > >> On Dec 16, 2007, at 12:57 PM, Steve Loughran wrote: > >> > >>> On Dec 16, 2007 5:49 PM, Julian Reschke <julian.reschke@...> > >>> wrote: > >>>> Subbu Allamaraju wrote: > >>>>> > >>>>> > >>>>> This is a variant of what I call as SOAPy REST (http://subbu. > >>>>> org/weblogs/ main/2007/ 10/soapy_ rest.html > >>>>> <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> > >>>>> ). Whoever wrote this API had no idea of why they were providing a > >>>>> resource centric interface. Yet another HTTP API! > >>>> > >>>> No, it's even much Much MUCH worse -- it uses GET for non-retrieval > >>>> actions. > >>> > >>> I nominate it for the 2007 Restless awards, in the much contested > >>> category of > >>> > >>> "things that claim to be RESTful but do side effects in their GETs" > >>> along with the ever popular > >>> "SOAP endpoint in disguise" category > >>> > >>> I know this mailing list has not, historically, had such awards, but > >>> now is as good a time to start as any.... > >>> > >>> > >>> > >>> Yahoo! Groups Links > >>> > >>> > >>> > >> > >> > >> > >> > >> Yahoo! Groups Links > >> > >> > >> > >> > >
> I nominate it for the 2007 Restless awards, .. > > I know this mailing list has not, historically, had such awards, but > now is as good a time to start as any.... Amazon did well in: == The 2006 'What Now How' Awards for REST Protocols* == with their S3 interface. Shame they've lost their touch and gone STREST**. =0) * http://duncan-cragg.org/blog/post/2006-what-now-how-awards-rest-protocols/ ** http://duncan-cragg.org/blog/post/strest-service-trampled-rest-will-break-web-20/ _____________________________________________ Now that there's quite a momentum behind REST, perhaps we should take this opportunity to use SimpleDB as a poster child (in the true sense of 'sad-looking, symbolic instance in need of our help'). Amazon is a high profile organisation with lots of skin in the Web 2.0 game. Let's try and get them to change to good Web architecture, then hype up the media attention... _________________________________ Duncan Cragg Web Application Architect The Financial Times Group (UK) http://www.ft.com
which only proves that doing good REST is as easy as bad REST. The
trouble is that the "action-oriented" mindset maps very well into SOAP
and not REST.

This brings up another question. Why do people want to provide both
SOAP and REST interfaces? When they decide to do so, do they want to use
the same code base to process the requests? I can see lots of issues.
Has anyone on this list got experience doing both for the same
application?

Subbu

On Dec 17, 2007, at 10:15 AM, Assaf Arkin wrote:
> On 12/17/07, Subbu Allamaraju <subbu.allamaraju@...> wrote:
>> That was quick :)
>
> It was surprisingly easy, couple of hours.
>
> Assaf
On 12/17/07, Subbu Allamaraju <subbu.allamaraju@...> wrote:
> which only proves that doing good REST is as easy as bad REST. The
> trouble is that the "action-oriented" mindset maps very well into SOAP
> and not REST.

And the resource mindset maps poorly to SOAP :-)

> This brings up another question. Why do people want to provide both
> SOAP and REST interfaces?

Because for any given service it's quite likely to find consumers who
will appreciate one and not the other, and adding a checklist item is
easier than taking a stand.

> When they decide to do, do they want to use
> the same code base to process the requests? I can see lots of issues.
> Has anyone on this list got experience doing both for the same
> application?

From personal experience, offering both SOAP and SOAP-w/o-envelope as
interchangeable protocol bindings is fairly trivial with most modern
WS-* stacks. But building around the REST architecture and then trying
to map it to SOAP, you end up losing features on the SOAP interface or
over-complicating it. And the reverse is true if you start out from the
WS-* architecture and try mapping it to REST [1].

Assaf

[1] http://blog.labnotes.org/2007/12/17/web-architectures-and-http-mediocrity/
On Dec 17, 2007, at 6:36 PM, Subbu Allamaraju wrote:
> This brings up another question. Why do people want to provide both
> SOAP and REST interfaces? When they decide to do, do they want to use
> the same code base to process the requests? I can see lots of issues.
> Has anyone on this list got experience doing both for the same
> application?

Yes. The same application I was talking about a couple days ago.
There's also a (very limited) SOAP API. It went down something like this:

* bizdev: hey, we need to do some interaction with Sharepoint, webparts, C# etc
* devs: let's make an HTTP/REST API, everything can talk to the web, it'll be awesome!
* bizdev: let's not alienate the enterprise people, do SOAP
* devs: awwwww, okay <time passes> here you go, hope that helps.
* bizdev: cool, thanks guys
* devs: I CAN HAZ REST API?
* bizdev: oh sure, you've been good
* devs: squeeeeeee! <time passes> whoa, man, that was awesome.
* other devs: wow, this is so much more awesome than the SOAP API, it's like useful and stuff
* bizdev: what you guys so excited about, I don't get it?

In this particular application the SOAP and REST APIs are dispatched
through the same web server and operate through shims of code that
dispatch into the same core code.

The SOAP API is a straightforward collection of 4 or 5 methods,
providing a very limited set of activities: there are a small number of
verbs and nouns tightly coupled with one another. The REST API is a
straightforward collection of a bunch of nouns just out there for your
messing with, with a limited set of well-known verbs.

We hear this a lot: "Can the SOAP API do X?" "No, but the REST API
can." This isn't just because the REST style is better at providing
stuff. It's also better, because of its consistency and integrity, at
encouraging code to be written: "Will you add feature X to the SOAP
API?" "Meh, that's hard." "Will you add feature X to the REST API?"
"Sure."
In the end it comes down to what people perceive the customer to want. Some customers are in environments where using SOAP or other WS-* is normal. If you want to support those people you sometimes have to work with their stuff. -- Chris Dent http://www.burningchrome.com/ or I'll remove your commit bit
On 17 Dec, 2007, at 3:19 PM, Assaf Arkin wrote: > Because for any given service it's quite likely to find consumes who > will appreciate one and not the other, and adding a checklist is > easier than taking a stand. We've seen this behavior before with competing, proprietary technology stacks and distributed-object approaches. A vendor was much more likely to get an "enterprise" sale if they supported a whole slew of acronyms. Quality tends to suffer, though, when products try to please everyone. Components and systems ended up being either mediocre all around or supporting one stack/language/protocol well and the others horribly. It appears, in this case, that Amazon started with a SOAP API and attempted to bolt on a "RESTful" (even with quotation marks, I'm being generous) one with minimal additional code. I think this is a large miscalculation on Amazon's part, because: a) Most of the people that are excited about the services they are rolling out are comfortable with REST in both philosophy and implementation. b) The only customers that would demand a SOAP interface are "enterprises" (there are those quotes again) that aren't looking to externalize their infrastructure to Amazon. I could be completely missing the mark. As a SOAP victim, I still bear the scars of former and present run-ins with WS-*. Is anyone seeing a high level of enthusiasm for Big Web Services that I'm just missing? ----- David Sidlinger
Assaf,

Thanks for restating your point on list. (I've trained my fingers on
GMail to always use 'a' instead of 'r'. :)

I second these points you make:
* interface definition languages work to some degree
* something else would provide "REST with benefits" and be of the web

The example you are working from (2 services communicating P2P) is
still too complicated for me at this stage. The two scenarios that I
think are important to describe are:
1) A machine client that can automate interaction with multiple REST
services that support a shared hypermedia schema
2) The same client continuing to work against a REST service that is
"evolved" with new functionality.

John

On Dec 14, 2007 7:33 PM, Assaf Arkin <assaf@...> wrote:
> On 12/12/07, John D. Heintz <jheintz@...> wrote:
> > Hello all,
> >
> > Please help me explain this stuff better, or just correct what I've written.
> >
> > In my blog I called [1] JJ Dubray out for claiming that REST "cannot
> > efficiently deal with the state changes (content and lifecycle) of a
> > resource".
> >
> > In later comments and further posting by JJ the real issue is:
> > "what must be shared between a provider and consumer"
>
> So once again, like clock-work my e-mail client decided to reply but
> not to the list, and I ended up in a private conversation about this
> with John. Too long to summarize the entire thread, but here's the
> gist of the point I'm trying to make.
>
> We know the stuff JJD is talking about works for SOAP with various
> levels of success, call it 1st generation Web services if you will,
> now how do we move forward and apply it to REST? And I'm talking
> about REST with benefits, not RPC over HTTP, so obviously things will
> translate differently, you'll end up designing processes in a
> different way to take advantage of REST characteristics. But what
> would it look like?
>
> The key problem here is composition.
>
> The simplest use case I could come up with is this.
>
> We have a workflow and a task manager.
The workflow pushes tasks to > the task manager, people use the task manager to manage and perform > these tasks, and the outcomes are fed back to the workflow. > > The task manager is a resource, so is every task created there, so you > can imagine using POST to create tasks, PUT to update them, ETags for > caching and conflict resolution, and all other good things. > > The workflow also has a resource, an outcome, that the task manager > updates when the task completes, or deletes if the task is cancelled. > > So they're acting as P2P, and I'm picking this as a typical scenario > indicative of more complex composition problems we're seeing out > there, but simple enough to wrap my head around it. > > We have two teams. Team red is working on the workflow, which does a > lot of other things, not interesting for this discussion. Team blue > is working on the task manager. We're digging a tunnel from both > sides planning to meet in the middle and open it up for traffic. > > What is the minimum shared understanding that both teams need to make > it happen? > > If I'm using tools to help with the design, build test case scenarios, > change management in future versions, what artifacts would I need? > > And what would make the end result compelling over WS-* and beneficial > in its usage of REST? > > Assaf > > -- John D. Heintz Principal Consultant New Aspects of Software http://newaspects.com http://johnheintz.blogspot.com Austin, TX (512) 633-1198
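The workflow/task-manager scenario quoted above can be sketched as concrete HTTP messages. A minimal Python sketch, under stated assumptions: the URIs (`/tasks`, `/outcomes/42`) and the JSON payload shapes are hypothetical illustrations, and the functions only build the messages the two services would exchange, including the `If-Match` guard that makes a repeated or stale PUT safe (the optimistic-locking mechanism from RFC 2616 section 14.24):

```python
# Sketch of the workflow <-> task-manager exchange described in the
# quoted scenario. URIs, media type, and payload shapes are hypothetical.

def create_task_request(outcome_uri, description):
    """Workflow -> task manager: POST a new task under the /tasks collection."""
    body = '{"description": "%s", "outcome": "%s"}' % (description, outcome_uri)
    return ("POST", "/tasks", {"Content-Type": "application/json"}, body)

def complete_outcome_request(outcome_uri, etag, result):
    """Task manager -> workflow: PUT the outcome. If-Match carries the
    last ETag seen, so a stale PUT is rejected rather than silently
    overwriting a newer state."""
    headers = {"Content-Type": "application/json", "If-Match": etag}
    return ("PUT", outcome_uri, headers, '{"result": "%s"}' % result)

create = create_task_request("/outcomes/42", "review the document")
complete = complete_outcome_request("/outcomes/42", '"v1"', "completed")
```

The interesting design question the thread raises is exactly what both teams must agree on beforehand: here it is the two URIs, the payload vocabulary, and the ETag discipline.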
Josh,
You are right that it must constrain state-transition paths "to some
degree"; where I wrote "can't pre-constrain state transition path",
that doesn't make much sense.
Here is an assumption that I think is present in WADL, JJ's comments,
Assaf's comments, and your comments:
Any schema must provide the client knowledge about what verbs can be
used and when: "up front", instead of being "discovered".
Your comment to this effect is: "A machine client needs to know that
after seeing <edit href="..."> in a FooML document, it can POST to the
provided URL with a BarML document"
That is a valid way to document FooML and BarML service providers, but
it then unduly constrains them: preventing the evolution of those
servers.
You of course mention that HTML has a very generic mechanism to
support this, and I'm sure you don't misunderstand these things. I'm
trying to highlight that most of these discussions start with a basic
assumption of documenting the HTTP verbs and single transitions up
front.
My revised list of characteristics of a RESTful schema:
1) One or more hypermedia document types.
2) Discover URIs (don't hard code them)
3) Prefer discoverable transition declarations over hard-coded transitions
* embed some forms-like markup
* enables URI-templating for GET requests
* enables server extensions to messages (adding request-ID to support
idempotent POSTs)
* enables server evolution to vary the verb over time
4) Assume multi-step processing (instead of always a single request/response)
* HTTP redirects
* Move from single POST to Reliable Post (POST then PUT...)
* Support single to multi-stage interaction
* Perhaps support a multi-stage interaction with a media-type for
computer "CPU taxing" to slow down request speed
I'm beginning to imagine a RESTful schema is a mixture of the following:
* at least one extensible hyperdocument type
* a high-level state machine for each "interesting" resource type
(this could be a standard markup)
* a forms language that is embeddable within the hyperdocuments and is
present in the representations based on current state and available
transitions.
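To make that mixture concrete, here is one hypothetical shape the embedded, forms-like markup could take: a representation that advertises a transition, the verb to use, a URI template, and a server-supplied request-ID for idempotent POSTs. All element and attribute names below are invented for illustration, not an existing standard:

```xml
<order xmlns="http://example.org/hypothetical/orders" state="pending">
  <total currency="USD">19.99</total>
  <!-- discoverable transition: verb, target, and expected media type
       are declared by the server rather than hard-coded in the client;
       the request-id lets the server deduplicate a repeated POST -->
  <transition rel="cancel" method="POST"
              href-template="/orders/{id}/cancellation"
              request-id="a9f3-0021"
              accepts="application/vnd.example.cancel+xml"/>
</order>
```

A client that understands only the generic `transition` vocabulary can follow any such transition, which is what allows the server to vary the verb or URI over time.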
The biggest counter-example to this discovery process is efficiency.
The thesis clearly indicates that efficiency is sometimes reduced to
achieve the other properties.
It seems that a reasonable strategy could be achieved to reach both
discovery and efficiency by:
* exposing cacheable resource transition representations (a WADL
document cacheable for 1 day for each transition?)
* adding optimistic version locking to the transition representations
After initial discovery a busy client service would directly POST a
single representation with a version number from the transition doc,
either embedded in the POST or sent as an HTTP header, until either:
* the cache time ran out,
* the server failed with transition version mismatch.
The client would then simply rediscover and re-cache.
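That discover/cache/retry loop can be sketched in Python. This is only a sketch under stated assumptions: the `fetch` and `post` callables stand in for real HTTP calls, and the `X-Transition-Version` header and the 409 status for a transition-version mismatch are invented for illustration:

```python
# Sketch of the discover -> cache -> POST -> rediscover loop described
# above. fetch returns (uri, version) from a cacheable transition doc;
# post performs the request and returns an HTTP status code.

import time

class TransitionClient:
    def __init__(self, fetch, post, ttl=86400):
        self.fetch = fetch          # () -> (uri, version)
        self.post = post            # (uri, body, headers) -> status code
        self.ttl = ttl              # cache the transition doc ~1 day
        self.cached = None
        self.cached_at = 0.0

    def submit(self, representation):
        # Rediscover when the cache is cold or has expired.
        if self.cached is None or time.time() - self.cached_at > self.ttl:
            self._rediscover()
        uri, version = self.cached
        status = self.post(uri, representation,
                           {"X-Transition-Version": version})
        if status == 409:           # server: transition version mismatch
            self._rediscover()      # re-cache and retry once
            uri, version = self.cached
            status = self.post(uri, representation,
                               {"X-Transition-Version": version})
        return status

    def _rediscover(self):
        self.cached = self.fetch()
        self.cached_at = time.time()

# Usage with stub transports: the first cached version is stale, so the
# server rejects it and the client rediscovers before retrying.
versions = iter([("/orders", "v1"), ("/orders-v2", "v2")])
client = TransitionClient(
    lambda: next(versions),
    lambda uri, rep, hdrs: 409 if hdrs["X-Transition-Version"] == "v1" else 201,
)
status = client.submit("<order/>")
```

The stub shows the trade-off John describes: the busy client pays the discovery cost only on cache expiry or version mismatch, keeping both discoverability and efficiency.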
Let me know what you think,
John
On Dec 15, 2007 10:40 AM, Josh Sled <jsled@...> wrote:
> "John D. Heintz" <jheintz@...> writes:
> > In my most recent post [2] I tried to explain this "shared
> > understanding" as being in a Representation:
> > * conforming to one or more shared schemas
> > * be extensible
> > * can't pre-define URIs
> > * can't pre-constrain state transition paths (resource state
> > transitions, not client)
>
> I think it has to constrain state-transition paths, to some degree. That is,
> without intelligence, the possible state transitions (both successful/[23]xx
> codes and unsuccessful) need to be enumerated. A machine client needs to
> know that after seeing <edit href="..."> in a FooML document, it can POST to
> the provided URL with a BarML document, and that will either succeed or fail
> in the known ways.
>
> HTML has a very generic mechanism for this in forms. APP has a more detailed
> yet generic model of Workspace'ed Collections of Entries.
>
>
> > Hmmm, I think the essential questions are:
> > 1) What is the smallest (most constrained) shared understanding possible?
> > 2) In what ways is that different from WSDL/WADL?
>
> After reading some of your [1], I'd say that W*DL focus on describing
> particular interfaces, not the overall models and transitions of states;
> they're also apart from the hypermedia rather than within it.
>
> > [1] http://johnheintz.blogspot.com/2007/11/just-in-rest-cant-handle-state.html
> > [2] http://johnheintz.blogspot.com/2007/12/shared-understanding-andor-evolvability.html
>
> --
> ...jsled
> http://asynchronous.org/ - a=jsled; b=asynchronous.org; echo ${a}@${b}
>
--
John D. Heintz
Principal Consultant
New Aspects of Software
http://newaspects.com
http://johnheintz.blogspot.com
Austin, TX
(512) 633-1198
pkeane wrote:
> Agreed, but that's not what I am talking about here. In no case is
> there a "shared secret" communicated by way of a cookie. The cookie is
> used ONLY to construct a new url to access another resource. Whether I
> use a cookie or some other mechanism is irrelevant.

Actually, it's quite relevant. Another principle of REST is that
hypertext is the engine of application state. URLs are passed in
hypertext. URL construction from algorithms or other non-hypertext
information like cookies is non-RESTful.

I'm not sure I buy that principle myself. I often find URL construction
to be quite useful. However if an app violates the principle in the way
you suggest here by building URLs from cookies, the application is not
RESTful, for better or worse.

--
Elliotte Rusty Harold
elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
* Elliotte Rusty Harold <elharo@...> [2007-12-19 12:55]:
> Actually, it's quite relevant. Another principle of REST is
> that hypertext is the engine of application state. URLs are
> passed in hypertext. URL construction from algorithms or other
> non-hypertext information like cookies is non-RESTful.
>
> I'm not sure I buy that principle myself. I often find URL
> construction to be quite useful. However if an app violates the
> principle in the way you suggest here by building URLs from
> cookies, the application is not RESTful, for better or worse.

I don’t see how delivering code to the client that constructs URIs
from application state is much different from delivering hypermedia
with forms. The only difference is that the code is expressed in a
Turing-complete format. And Code on Demand is part of ReST.

The goal of ReST is to decouple clients and servers. Constructing
URIs is bad insofar as building knowledge of the URI space into the
client couples the client to the server. But when the URI construction
takes place via code that the server publishes, then the server can
change its URI space largely at will, because it can change the
published code to match and clients will soon follow suit – just as
it can with forms. The client thus remains decoupled from the server.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
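A small Python sketch of the distinction being debated: the hypertext-driven client below pulls its next URI out of a link in the representation, while the commented-out alternative assembles it from cookie state. The markup and the `rel` name are hypothetical:

```python
# Hypertext-driven navigation vs. client-side URI construction.

from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect <a> targets keyed by their rel attribute."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "rel" in a and "href" in a:
            self.links[a["rel"]] = a["href"]

page = ('<html><body>'
        '<a rel="collection" href="/users/bob/collections?cat=book">books</a>'
        '</body></html>')

finder = LinkFinder()
finder.feed(page)
next_uri = finder.links["collection"]  # URI supplied by the server

# The alternative Elliotte warns about builds the URI from cookie state:
#   next_uri = "/users/%s/collections?cat=book" % cookies["user"]
# which hard-wires the server's URI layout into the client.
```

Either way the client ends up at the same resource; the difference is whether the server can later change its URI layout without breaking deployed clients.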
I know RESTful is good for hierarchical resources. But I ran into a
problem when designing my (hopefully) restful web service.

My website users can add books or movies to their lists, with some
predefined status (e.g. "have read", "wish to watch", etc.) and
comments. How can I let web service users get information like "get
all the books Bob wishes to read, with his comments"?

I think the resources in this case are not books, because comments are
not attributes of books. They are attributes of the "relationship of
Bob and the book". Let me call the relationship "collection". So I
designed the following URLs and got lost:

/users/Bob/collections?cat=book&status=wish
/users/Bob/collections/book/wish
/users/Bob/collections/cat/book/status/wish
/collections/user/Bob/cat/book/status/wish
/collections?user=Bob&cat=book&status=wish

Which one do you consider the most "RESTful"? I am completely lost.
And I cannot even decide whether to use "users" or "user" in the URLs.
Anyone give me a hint? Thanks!
Here is another nominee for your 2007 Restless awards. Plaxo.com have their REST reference guide here [1]. I included a few samples, just to whet your whistle. To GET a list of folders you POST this body: package=['Header', 'ProtoVer', '1', 'ClientID', 'PLXI:01000000000528523545360519113885', 'Client', 'PlaxoThunderBird/0.9', 'OS', 'windows/service pack infinity', 'Platform', 'Outlook/2005', 'Identifier', 'youraccount@...', 'Password', 'testpassword', 'AuthMethod', 'Plaxo'] ['/Header'] ['Get', 'Type', 'folder', 'Target', 'folders'] ['/Get'] to this URL https://testapi.plaxo.com/rest or you can put the whole thing in the URL i.e. https://testapi.plaxo.com/rest?package=['Header', 'ProtoVer', '1', 'ClientID', 'PLXI: 01000000000123456789', 'Identifier', 'youraccount@...', 'AuthMethod', 'Plaxo', 'Password', 'yourpassword', 'Client', 'PlaxoThunderBird/0.9', 'OS', 'windows/service pack infinity', 'Platform', 'Outlook/2005']%0a['/Header'] This should win in the special category of "Even more abusive than SOAP". Enjoy, Darrel [1] http://www.plaxo.com/css/api/Plaxo%20REST%20Binding%201.0.pdf On Dec 16, 2007 3:57 PM, Steve Loughran <steve.loughran.soapbuilders@...> wrote: > > > > > On Dec 16, 2007 5:49 PM, Julian Reschke <julian.reschke@...> wrote: > > Subbu Allamaraju wrote: > > > > > > > > > This is a variant of what I call as SOAPy REST (http://subbu. > > > org/weblogs/ main/2007/ 10/soapy_ rest.html > > > <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> > > > ). Whoever wrote this API had no idea of why they were providing a > > > resource centric interface. Yet another HTTP API! > > > > No, it's even much Much MUCH worse -- it uses GET for non-retrieval > actions. 
> > I nominate it for the 2007 Restless awards, in the much contested category > of > > "things that claim to be RESTful but do side effects in their GETs" > along with the ever popular > "SOAP endpoint in disguise" category > > I know this mailing list has not, historically, had such awards, but > now is as good a time to start as any.... > > >
On Wed, Dec 19, 2007 at 04:10:40PM -0000, qiangninghong wrote: > I know RESTful is good for hierarchical resources. But I meet a > problem when design my (hopefully) restful web service. > > My website users can add books or movies to their lists, with some > predefined status (e.g. "have read", "wish to watch", etc.) and > comments. How can I let web service users to get information like > "get all the books Bob wish to read, with his comments"? > > I think the resources in this case are not books, because comments are > not attributes of books. They are attributes of the "relationship of > Bob and the book". Let me call the relationship "collection". So I > designed the following URLs and lost: > > /users/Bob/collections?cat=book&status=wish > /users/Bob/collections/book/wish > /users/Bob/collections/cat/book/status/wish > /collections/user/Bob/cat/book/status/wish > /collections?user=Bob&cat=book&status=wish > > Which one do you consider the most "RESTful"? URI design is very nice for human users. And it's a good opportunity to think about resource design. But it's good to remember that REST has no constraints on what your URIs should look like. It's just an identifier. That said, personally I'd go with either your first or your last one, since the resource you're naming isn't really part of a hierarchy, so slashes would be misleading to human eyes. And it would be trivial to provide an html form that constructs such a URI, so you have HATEOAS. The O'Reilly book has a nice section on URI design. -- Paul Winkler
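As a sketch of the HTML form Paul mentions: a plain GET form whose field names match the query parameters constructs the first (query-string) URI style for the client. The markup below is illustrative, not from any actual site:

```html
<!-- Submitting this form with method="get" yields
     /users/Bob/collections?cat=book&status=wish -->
<form method="get" action="/users/Bob/collections">
  <select name="cat">
    <option value="book">book</option>
    <option value="movie">movie</option>
  </select>
  <select name="status">
    <option value="wish">wish</option>
    <option value="read">have read</option>
  </select>
  <input type="submit" value="Show collection"/>
</form>
```

Because the client obtains the URI by submitting server-supplied markup rather than assembling it from documentation, the server stays free to rearrange its URI space later.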
i always assume that users will 'hack' about with the URLs. therefore,
i only present 'folder' (resource) items that i know will be able to
return meaningful information. to use your third and fourth examples,
i'd want to make sure each of these returns something meaningful:

/users/ (a collection of users?)
/users/bob/ (info on bob)
/users/bob/collections/ (a list of bob's collections?)
/users/bob/collections/cat/ (a list of bob's collection categories?)
/users/bob/collections/cat/book/ (a list of bob's items in the book category?)
/users/bob/collections/cat/book/status/ (a list of bob's book category item status codes?)

mike a

--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act." (George Orwell)
On 12/18/07, John D. Heintz <jheintz@...> wrote:
> The two scenarios that I think are important to describe are:
> 1) A machine client that can automate interaction with multiple REST
> services that support a shared hypermedia schema
> 2) The same client continuing to work against a REST service that is
> "evolved" with new functionality.

I would phrase it slightly differently:

1. It must be subject to automation, i.e. be able to devise an
algorithm that will run its course.
2. It must take the form of a spec, i.e. be able to develop and test
components of the system in isolation.
3. It must minimize coupling, i.e. accommodate additions to the spec
and localize changes to components.

Automation. I think there's a lot of benefits we can derive from REST
in that area, and fortunately the technology is catching up, so that
SOAP (with or without the envelope) is no longer the obvious choice.
But the key is applying the REST principles to something that is
fundamentally not human-driven, which may end up taking a slightly
different form. I think this change is good; it's just a matter of
where we take it from here.

Specs. In my world that's mandatory; adherence to the spec is the
first constraint applied to a service. I don't buy the notion that
delivering self-descriptive messages is a sufficient architecture. I
don't even buy that it works for HTML and Web sites; just consider how
many end-users leave sites in frustration over the UI. Even the simple
act of measuring the UI against user expectation, employee training
and helpdesk support, all require a spec.
Once you solve the spec problem -- and I'm not saying there's a
singular solution -- a lot of possibilities open up. Test-driven
development, development in parallel tracks, utilizing existing
services (including SaaS), and change management. Just being able to
tell which change is an implementation detail and which one will bring
the company to a halt, and acting accordingly.

Coupling. The goal is to minimize coupling, making the difference
between that which is necessarily coupled and that which is not. The
idea that I can drive a stock trading service to behave like a
conference scheduling service is absurd, but choosing one over the
other is a form of coupling. It should be possible to add new features
to the scheduling service without breaking existing clients, but also
to realize which changes will break clients, and which clients
absolutely demand the new functionality.

Assaf
Very interesting question, and I see this coming up in most design
discussions.
The criteria I would use are the following:
a. The URI structure should clearly identify resources
b. Clients should be able to traverse from one resource to another. By
traversing, I don't mean hyper-linking. For instance, if a user Bob
has books and CDs, it is easy to distinguish Bob's books (/bob/books)
from Joe's books (/joe/books) via URIs that uniquely identify those
resources.
Call it aesthetics, but a clean URI structure can save you long
paragraphs describing the resource structure.
> My website users can add books or movies to their lists, with some
> predefined status (e.g. "have read", "wish to watch", etc.) and
> comments. How can I let web service users get information like
> "get all the books Bob wishes to read, with his comments"?
>
> I think the resources in this case are not books, because comments are
> not attributes of books. They are attributes of the "relationship of
> Bob and the book". Let me call the relationship "collection". So I
If I understand the use case correctly, one goal is to represent a
user's comments on a given book. Right? If so, how about the following:
/books/{book}/comments/{user}
to identify the comments on a {book} made by a given {user}.
A request to this URI could very well point the client to a comment
/comments/{comment} (via the Content-Location header).
Alternatively,
/users/{user}/{status}/comments/{book}
to point the client to the same comment /comments/{comment}.
> designed the following URLs and got lost:
>
> /users/Bob/collections?cat=book&status=wish
/users/{user}/books/{status} -> to return a collection of books
/users/{user}/books/{status}/books/{book} -> to point the client to a
book /books/{book}
> /users/Bob/collections/book/wish
> /users/Bob/collections/cat/book/status/wish
> /collections/user/Bob/cat/book/status/wish
> /collections?user=Bob&cat=book&status=wish
>
> Which one do you consider the most "RESTful"? I am completely lost.
> And I can't even decide whether to use "users" or "user" in the URLs. Anyone
> give me a hint? Thanks!
Both are.
Subbu
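Any of the layouts suggested above is straightforward to dispatch on; a URI template matcher is a few lines of regex. A sketch (the match_template helper is hypothetical; real frameworks ship their own routers):

```python
import re

def match_template(template, path):
    """Match a URI template such as /users/{user}/books/{status} against a
    concrete path, returning the bound variables or None on no match."""
    # Turn each {name} into a named group that stops at the next slash.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(pattern, path)
    return m.groupdict() if m else None

print(match_template("/users/{user}/books/{status}", "/users/Bob/books/wish"))
# {'user': 'Bob', 'status': 'wish'}
print(match_template("/books/{book}/comments/{user}", "/books/rest/comments/Bob"))
# {'book': 'rest', 'user': 'Bob'}
```

Which is part of why the "which spelling is more RESTful" question matters less than it seems: the server owns the mapping either way, and clients should be following links rather than baking the template in.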
The good thing is that all these companies are trying to open up their systems with public APIs. However, the REST-branding is unfortunate. It is not just the quality of these APIs that is bothering, but the quality of infrastructure they are building behind these APIs. Since these APIs are so HTTP-unfriendly, I can't help but conclude that these APIs are being implemented poorly over the web infrastructure without taking care of On Dec 19, 2007, at 7:43 AM, Darrel Miller wrote: > Here is another nominee for your 2007 Restless awards. Plaxo.com have > their REST reference guide here [1]. I included a few samples, just > to whet your whistle. > > To GET a list of folders you POST this body: > > package=['Header', 'ProtoVer', '1', 'ClientID', > 'PLXI:01000000000528523545360519113885', 'Client', > 'PlaxoThunderBird/0.9', 'OS', 'windows/service pack infinity', > 'Platform', 'Outlook/2005', 'Identifier', > 'youraccount@...', 'Password', 'testpassword', 'AuthMethod', > 'Plaxo'] > ['/Header'] > ['Get', 'Type', 'folder', 'Target', 'folders'] > ['/Get'] > > to this URL > > https://testapi.plaxo.com/rest > > or you can put the whole thing in the URL i.e. > > https://testapi.plaxo.com/rest?package=['Header', 'ProtoVer', '1', > 'ClientID', 'PLXI: 01000000000123456789', > 'Identifier', 'youraccount@...', 'AuthMethod', 'Plaxo', > 'Password', 'yourpassword', 'Client', > 'PlaxoThunderBird/0.9', 'OS', 'windows/service pack infinity', > 'Platform', 'Outlook/2005']%0a['/Header'] > > > This should win in the special category of "Even more abusive than > SOAP". > > Enjoy, > > Darrel > > [1] http://www.plaxo.com/css/api/Plaxo%20REST%20Binding%201.0.pdf > > > On Dec 16, 2007 3:57 PM, Steve Loughran > <steve.loughran.soapbuilders@...> wrote: >> >> >> >> >> On Dec 16, 2007 5:49 PM, Julian Reschke <julian.reschke@...> >> wrote: >>> Subbu Allamaraju wrote: >>>> >>>> >>>> This is a variant of what I call as SOAPy REST (http://subbu. 
>>>> org/weblogs/ main/2007/ 10/soapy_ rest.html >>>> <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> >>>> ). Whoever wrote this API had no idea of why they were providing a >>>> resource centric interface. Yet another HTTP API! >>> >>> No, it's even much Much MUCH worse -- it uses GET for non-retrieval >> actions. >> >> I nominate it for the 2007 Restless awards, in the much contested >> category >> of >> >> "things that claim to be RESTful but do side effects in their GETs" >> along with the ever popular >> "SOAP endpoint in disguise" category >> >> I know this mailing list has not, historically, had such awards, but >> now is as good a time to start as any.... >> >> >> > > > > Yahoo! Groups Links > > >
On Dec 17, 2007 7:08 PM, David Sidlinger <david.sidlinger@...> wrote: > On 17 Dec, 2007, at 3:19 PM, Assaf Arkin wrote: > > > Because for any given service it's quite likely to find consumers who > > will appreciate one and not the other, and adding a checklist is > > easier than taking a stand. > > > We've seen this behavior before with competing, proprietary technology > stacks and distributed-object approaches. A vendor was much more > likely to get an "enterprise" sale if they supported a whole slew of > acronyms. Quality tends to suffer, though, when products try to > please everyone. Components and systems ended up being either > mediocre all around or supporting one stack/language/protocol well and > the others horribly. > > It appears, in this case, that Amazon started with a SOAP API and > attempted to bolt on a "RESTful" (even with quotation marks, I'm being > generous) one with minimal additional code. I think this is a large > miscalculation on Amazon's part, because: > > a) Most of the people that are excited about the services they are > rolling out are comfortable with REST in both philosophy and > implementation. > > b) The only customers that would demand a SOAP interface are > "enterprises" (there are those quotes again) that aren't looking to > externalize their infrastructure to Amazon. > I would point you at my recent slideware on infrastructure evolution: http://people.apache.org/~stevel/slides/farms_fabrics_and_clouds.pdf The enterprises are adopting virtualisation, primarily as a way of consolidating existing web sites onto less physical hardware, with corresponding savings in hardware and energy, plus the ability to gain some reliability without massive hardware duplication. What I have no evidence of is pickup of EC2/S3 infrastructure by the enterprises. Remember, these are organisations that pay for Oracle databases and have full-time Oracle-certified DBAs prepared to argue the merits of Oracle over everything else.
Adopting things like postgres or mysql is still hard, let alone moving to SimpleDB. > I could be completely missing the mark. As a SOAP victim, I still > bear the scars of former and present run-ins with WS-*. Is anyone > seeing a high level of enthusiasm for Big Web Services that I'm just > missing? 1. This mailing list is self-selecting of people who've been burned by SOAP stack pain too often to go near it. Me, I'm happier with CORBA or RMI than WS-*, as at least things marshal well, and the Distributed Object architecture can be managed if you can roll out code updates to all nodes simultaneously. 2. WS-* is pretty deep in the enterprise, especially as the glue between "both" platforms, Windows and Java. Hence Sun's investment in better WS-* interop. In in-house, single-vendor systems, WS-* can be made to work over space -but not necessarily time. Again, with a decent deployment infrastructure where you can roll out code everywhere simultaneously, you can stay in control. The WS-* tooling has set up enterprise developers with certain expectations: -you don't need to know WSDL or XSD -you don't need to work with raw XML -you can take the WSDL and have the client code generated for you There's probably also the implicit expectation that remote services look like synchronous remote method calls. As we on this list know, it is these assumptions that lead to so many problems -machine-generated classes that can't handle changes in the XML, blocking RPC operations that can't handle failures, etc. But out there in the field, the stuff does work on a single version of an app, using the same toolkit at both ends, so people can easily roll out something big -because the computing world (esp. MS and IBM) says WS-* is good.
It's only later that they discover problems with -attempts to connect new clients -attempts to change the interface Failures at these points may be blamed on the new clients or their tooling, rather than fundamental flaws in the whole development methodology of WS-* applications. Returning to Amazon services: 1. S3 is beautiful. Though I think their metadata stuff looks a bit tacked on...they should have had a special metadata/ resource for everything from the outset. If Amazon released an official WADL description it may even get takeup in the tooling. Also, as S3 has a Ruby clone, you can host all of S3 -bar DNS tricks- in house. Their authentication model is a bit complex, but restlets do it so I don't have to, and it lets you delegate rights to others for controlled periods of time. 2. I believe EC2 was built on Axis 1.x, as that was initially the only SOAP client that could talk reliably to it using WS-Security. Perhaps inside Amazon they use(d) Axis 1.x everywhere. Interestingly, their HTTP QUERY API has more features than the SOAP one, like the ability to publish context information. The ability to provide such info was (initially) restricted to the HTTP API, and the way to retrieve this info is from a GET. 3. SimpleDB is underwhelming. I wouldn't commit to using it as my only datastore. Whoever owns your database owns you, as they say, usually in relation to DB2. The fact that you can't use mysql or postgres on EC2 is a limitation of their architecture (and the fact that the sole persistence model of EC2 is S3). I'm not convinced that SimpleDB is the solution. I'd rather something that adopted GData. -steve
(completing the rant) The good thing is that all these companies are trying to open up their systems with public APIs. However, the REST-branding is unfortunate. It is not just the quality of these APIs that is bothering, but the quality of infrastructure they are building behind these APIs. Since these APIs are so HTTP-unfriendly, I can't help but conclude that these APIs are being implemented poorly over the web infrastructure without taking care of such basics as cacheability, idempotency etc. Subbu On Dec 19, 2007, at 12:53 PM, Subbu Allamaraju wrote: > The good thing is that all these companies are trying to open up > their systems with public APIs. However, the REST-branding is > unfortunate. It is not just the quality of these APIs that is > bothering, but the quality of infrastructure they are building > behind these APIs. Since these APIs are so HTTP-unfriendly, I can't > help but conclude that these APIs are being implemented poorly over > the web infrastructure without taking care of > > > On Dec 19, 2007, at 7:43 AM, Darrel Miller wrote: > >> Here is another nominee for your 2007 Restless awards. Plaxo.com >> have >> their REST reference guide here [1]. I included a few samples, just >> to whet your whistle. >> >> To GET a list of folders you POST this body: >> >> package=['Header', 'ProtoVer', '1', 'ClientID', >> 'PLXI:01000000000528523545360519113885', 'Client', >> 'PlaxoThunderBird/0.9', 'OS', 'windows/service pack infinity', >> 'Platform', 'Outlook/2005', 'Identifier', >> 'youraccount@...', 'Password', 'testpassword', 'AuthMethod', >> 'Plaxo'] >> ['/Header'] >> ['Get', 'Type', 'folder', 'Target', 'folders'] >> ['/Get'] >> >> to this URL >> >> https://testapi.plaxo.com/rest >> >> or you can put the whole thing in the URL i.e. 
>> >> https://testapi.plaxo.com/rest?package=['Header', 'ProtoVer', '1', >> 'ClientID', 'PLXI: 01000000000123456789', >> 'Identifier', 'youraccount@...', 'AuthMethod', 'Plaxo', >> 'Password', 'yourpassword', 'Client', >> 'PlaxoThunderBird/0.9', 'OS', 'windows/service pack infinity', >> 'Platform', 'Outlook/2005']%0a['/Header'] >> >> >> This should win in the special category of "Even more abusive than >> SOAP". >> >> Enjoy, >> >> Darrel >> >> [1] http://www.plaxo.com/css/api/Plaxo%20REST%20Binding%201.0.pdf >> >> >> On Dec 16, 2007 3:57 PM, Steve Loughran >> <steve.loughran.soapbuilders@...> wrote: >>> >>> >>> >>> >>> On Dec 16, 2007 5:49 PM, Julian Reschke <julian.reschke@...> >>> wrote: >>>> Subbu Allamaraju wrote: >>>>> >>>>> >>>>> This is a variant of what I call as SOAPy REST (http://subbu. >>>>> org/weblogs/ main/2007/ 10/soapy_ rest.html >>>>> <http://subbu.org/weblogs/main/2007/10/soapy_rest.html> >>>>> ). Whoever wrote this API had no idea of why they were providing a >>>>> resource centric interface. Yet another HTTP API! >>>> >>>> No, it's even much Much MUCH worse -- it uses GET for non-retrieval >>> actions. >>> >>> I nominate it for the 2007 Restless awards, in the much contested >>> category >>> of >>> >>> "things that claim to be RESTful but do side effects in their GETs" >>> along with the ever popular >>> "SOAP endpoint in disguise" category >>> >>> I know this mailing list has not, historically, had such awards, but >>> now is as good a time to start as any.... >>> >>> >>> >> >> >> >> Yahoo! Groups Links >> >> >> >
On Dec 19, 2007 1:29 PM, Subbu Allamaraju <subbu.allamaraju@...> wrote: > (completing the rant) > > The good thing is that all these companies are trying to open up their > systems with public APIs. However, the REST-branding is unfortunate. > It is not just the quality of these APIs that is bothering, but the > quality of infrastructure they are building behind these APIs. Since > these APIs are so HTTP-unfriendly, I can't help but conclude that > these APIs are being implemented poorly over the web infrastructure > without taking care of such basics as cacheability, idempotency etc. Just as long as they set the no-cache headers on the responses, so you don't end up with stale GET data.
> 3. SimpleDB is underwhelming. I wouldn't commit to using it as my only > datastore. Whoever owns your database owns you, as they say, usually > in relation to DB2. The fact that you can't use mysql or postgres on > EC2 is a limitation of their architecture (And the fact that the sole > persistence model of EC2 is S3). I'm not convinced that simpleDB is > the solution. I'd rather something that adopted GData. I don't see Amazon offering SimpleDB as a replacement for general purpose relational databases. The press loved to say that it is going to take on Oracle et al last week, but that is not the case. The key use cases I see are: - Read-most - Simpler structures without needing referential integrity - Semi-structured, i.e. items don't need to have the same set of attributes For these cases, SimpleDB may be able to offer more affordable and *scalable* storage without having to employ armies of DBAs. Of course, enterprises won't touch SimpleDB. But if I am running a startup with some imagination to scale, I might consider it. Subbu
On 12/19/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote: > 1. this mail list is self-selecting of people who've been burned by > soap stack pain too often to go near it. me, I'm happier with CORBA or > RMI than WS-*, as at least things marshall well, and the Distributed > Object architecture can be managed if you can roll out code updates to > all nodes simultaneously. > > 2. WS-* is pretty deep in the enterprise, especially as the glue > between "both" platforms, Windows and Java. Hence Sun's investment in > better WS-* interop. In in-house, single vendor systems, WS-* can be > made to work over space -but not necessarily time. Again, with a > decent deployment infrastructure when you can roll out code everywhere > simultaneously, then you can stay in control. > > The WS-* tooling has set up enterprise developers with certain expectations > -you dont need to know WSDL or XSD > -you dont need to work with raw XML > -you can take the WSDL and have the client code generated for you > There's probably also the implicit expectation that remote services > look like synchronous remote method calls. > > As we on this list know, it is these assumptions that lead to so many > problems -machine generated classes that can't handle changes in the > XML, blocking rpc operations that cant handle failures, etc. But out > there in the field, the stuff does work on a single version of an app, > using the same toolkit at both ends, so people can easily roll out > something big -because the computing world (esp. MS and IBM) say Ws-* > is good. Its only later that they discover problems with > -attempts to connect new clients > -attempts to change the interface > Failures at these point may be blamed on the new clients or their > tooling, rather than fundamental flaws in the whole development > methology of WS-* applications. 
I remember being told eight years ago that developers, already grown accustomed to the tooling and code generation, would see little value migrating from the comforts of CORBA to the theoretically more interesting, but practically immature, SOAP. Back when SOAP was rolling your own code to handle the ever so unfriendly DOM. So taking those eight years and projecting them into the future, my guess is: 1. Tools around REST will evolve to a similar level of comfort developers are accustomed to (as they always did). 2. And will generate inflexible code that will require rolling out changes everywhere simultaneously (as they always did). 3. Which will be blamed on the tooling, not the methodology (as we always did). Why the endless cycle? Because at each generation you are reducing the complexity involved in solving existing problems, liberating you to take on more challenging problems, bringing you back to the same level of complexity. Equilibrium, it seems, is the state when you're no longer able to deliver more features at standard industry costs. CEO frustration reigns, CTOs go reading InfoWorld for new answers, hype happens and a new cycle emerges. As it always does. What really changes in each generation is the size of the solution. If you remember back in CORBA days, two machines was considered a challenging distributed networking problem. Then we moved to server farms (more like cabinets) and grids, and now we're looking towards the clouds. Assaf -- http://labnotes.org
On 12/19/07, Subbu Allamaraju <subbu.allamaraju@...> wrote: > > 3. SimpleDB is underwhelming. I wouldn't commit to using it as my only > > datastore. Whoever owns your database owns you, as they say, usually > > in relation to DB2. The fact that you can't use mysql or postgres on > > EC2 is a limitation of their architecture (And the fact that the sole > > persistence model of EC2 is S3). I'm not convinced that simpleDB is > > the solution. I'd rather something that adopted GData. > > I don't see Amazon offering SimpleDB as a replacement for general > purpose relational databases. The press loved to say that it is going > to take on Oracle et al last week, but that is not the case. > > The key use cases I see are: > > - Read-most > - Simpler structures without needing referential integrity > - Semi-structured, i.e. items don't need to have the same set of > attributes Just like the Web. > > For these cases, SimpleDB may be able to offer more affordable and > *scalable* storage without having to employ armies of DBAs. Of course, > enterprises won't touch SimpleDB. But if I am running a startup with > some imagination to scale, I might consider it. There's a whole category of databases -- I call them read consistency, as distinguished from write consistency (Oracle et al) -- SimpleDB is one, also look at CouchDB and RDDB. Essentially they move logic out of the database and into the application, so you end up trading DBAs for developers, but also mainframe-era constraints for more modern tooling. And that has the potential to be disruptive. Separately there's the notion of moving your data to the cloud, which both S3 and SimpleDB do in different ways, which seems problematic in terms of pricing and ownership, but also works nicely for SalesForce. Assaf > > Subbu
On Dec 19, 2007, at 3:52 AM, Elliotte Rusty Harold wrote: > URLs are passed in > hypertext URL construction from algorithms or other non-hypertext > information like cookies is non-RESTful. > That's not even remotely true. If anything, REST encourages the creation of URIs by construction. Forms, server-side imagemaps, isindex, and any form of code-on-demand all construct URIs through algorithms. The important bit is that the algorithm is defined by the server and the resource remains accessible regardless of how the URI was calculated (i.e., the result of the algorithm is bookmarkable). ....Roy
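Roy's point about forms can be made concrete: an HTML GET form is exactly a server-supplied algorithm for constructing a URI. A minimal sketch, with hypothetical names and example.org as a stand-in:

```python
from urllib.parse import urlencode

# The server advertises a "form": a base URI plus the query parameters
# it expects. The client fills in values and constructs the URI itself
# -- the same algorithmic construction an HTML GET form performs. The
# resulting URI is bookmarkable regardless of how it was computed.
search_form = {"action": "http://example.org/search", "fields": ["q", "page"]}

def submit(form, **values):
    """Build the request URI from the server-defined form and client values."""
    query = urlencode({k: values[k] for k in form["fields"] if k in values})
    return form["action"] + "?" + query

print(submit(search_form, q="rest", page=2))
# http://example.org/search?q=rest&page=2
```

The server stays in control of the algorithm (it can change the form at any time), which is what keeps this construction RESTful.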
On Dec 19, 2007 5:25 PM, Assaf Arkin <assaf@...> wrote: > > On 12/19/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote: > > > > 2. WS-* is pretty deep in the enterprise, especially as the glue > > between "both" platforms, Windows and Java. Hence Sun's investment in > > better WS-* interop. In in-house, single vendor systems, WS-* can be > > made to work over space -but not necessarily time. Again, with a > > decent deployment infrastructure when you can roll out code everywhere > > simultaneously, then you can stay in control. > > > > The WS-* tooling has set up enterprise developers with certain expectations > > -you dont need to know WSDL or XSD > > -you dont need to work with raw XML > > -you can take the WSDL and have the client code generated for you > > There's probably also the implicit expectation that remote services > > look like synchronous remote method calls. > > > > As we on this list know, it is these assumptions that lead to so many > > problems -machine generated classes that can't handle changes in the > > XML, blocking rpc operations that cant handle failures, etc. But out > > there in the field, the stuff does work on a single version of an app, > > using the same toolkit at both ends, so people can easily roll out > > something big -because the computing world (esp. MS and IBM) say Ws-* > > is good. Its only later that they discover problems with > > -attempts to connect new clients > > -attempts to change the interface > > Failures at these point may be blamed on the new clients or their > > tooling, rather than fundamental flaws in the whole development > > methology of WS-* applications. > > I remember being told eight years ago that developers, already grown > accustomed to the tooling and code generation, would see little value > migrating from the comforts of CORBA to the theoretically more > interesting, but practically immature, SOAP. Back when SOAP was > rolling your own code to handle the ever so unfriendly DOM. 
> > So taking those eight years and projecting them into the future, my guess is: > > 1. Tools around REST will evolve to a similar level of comfort > developers are accustomed to (as they always did). > 2. And will generate inflexible code that will require rolling out > changes everywhere simultaneously (as they always did). > 3. Which will be blamed on the tooling, not the methodology (as we always did). > > Why the endless cycle? Because at each generation you are reducing > the complexity involved in solving existing problems, liberating you > to take on more challenging problems, bringing you back to the same > level of complexity. Equilibrium, it seems, is the state when you're > no longer able to deliver more features at standard industry costs. > > CEO frustration reigns, CTOs go reading InfoWorld for new answers, > hype happens and a new cycle emerges. As it always does. > > What really changes in each generation is the size of the solution. > If you remember back in CORBA days, two machines was considered a > challenging distributed networking problem. Then we moved to server > farms (more like cabinets) and grids, and now we're looking towards > the clouds. I think that's a pretty bleak assessment. I have a different theory, which is that every language is used as the prototype for its successors. CORBA and COM were written in the C era; they evolved to become actually usable in C++ code. Their model of IDL->.h/.cpp stubs worked very well with the superbly static world view of C/C++. Java and C# adopted a more agile form of communications with RMI and .NET remoting, both of which exploit the introspection features of the languages to eliminate some of the workflow stages of COM/Corba. SOAP took some of the ideas of this RPC/distributed object world view, and tried to make it cross platform by using XML as the transport. 
SOAP0.9 section 5 encoding was clearly designed to marshall object graphs over the wire; SOAP1.0 and 1.1 adopted XSD to move to documents instead. But the inherent inflexibilties of the language prevent the tools being agile. If you want to move way from a DOM/XOM tree, you need to know what XML to expect -at compile time. .NET 3.0 and java7 are trying to handle XML with more agility, but they still dont like you adding new attributes/methods to existing classes. Whereas if I were to work with datastructures in a more dynamic language (scheme, prolog, javascript, etc). you can turn an arbitrary incoming text encoded datastructure straight into the type system of the platform (scheme lists, prolog clauses, javascript prototypes, ....). If we are going to have clients and servers that are less brittle than their previous generations, and yet which are still easy for people to code for, I think we need to move beyond java/C#. I'm not going to advocate any specific language, just think its time to move on, at least from the aspect of the bits of code that deal with communication with other machines. Which, as it turns out, is a large slice of modern applications. -steve
What are the normal response codes to HTTP delete operations? HTTP 1.1 says at ( http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.7 ): "A successful response SHOULD be 200 (OK) if the response includes an entity describing the status, 202 (Accepted) if the action has not yet been enacted, or 204 (No Content) if the action has been enacted but the response does not include an entity." Yet S3 returns 404, so I'm coding in some expectations for that. It also seems to me that if you are going to return an idempotent 4xx error code, then 410 "Gone" is also legit.
On Dec 19, 2007 8:01 PM, Berend de Boer <berend@...> wrote: > >>>>> "Steve" == Steve Loughran <steve.loughran.soapbuilders@...> writes: > > Steve> What are the normal response codes to HTTP delete operations? > > Either 202 or 204. > > IMO, the interpretation of idempotent for delete is that multiple > deletes lead to 202, i.e. there's no distinction between the two. The > identified resource is deleted. If you delete it once or twice, it's > still deleted and the response should be the same. I guess so, but S3 still returns 404, or, as my unit tests point out: RestletOperationException:: DELETE on https://s3.amazonaws.com/smartfrogtest : Status code 404 is out of range of 200-299 That is, delete a bucket and you get a 404 back. I guess it is a way to return an idempotent response without ever having to remember if the remote resource ever existed in the past.
Has anyone voiced their concerns and suggestions on the Amazon Web Services forums? I spoke with a few of their engineers and project leaders. They are smart guys and want to do the right thing. For now, SimpleDB isn't even yet in closed beta. I would hope that if members of this list provide constructive suggestions on how to implement their API in a more RESTful manner, they might actually listen and act upon it. - Steve -------------- Steve G. Bjorg http://wiki.mindtouch.com http://wiki.opengarden.org On Dec 19, 2007, at 1:38 PM, Steve Loughran wrote: > On Dec 19, 2007 1:29 PM, Subbu Allamaraju > <subbu.allamaraju@...> wrote: > > (completing the rant) > > > > The good thing is that all these companies are trying to open up > their > > systems with public APIs. However, the REST-branding is unfortunate. > > It is not just the quality of these APIs that is bothering, but the > > quality of infrastructure they are building behind these APIs. Since > > these APIs are so HTTP-unfriendly, I can't help but conclude that > > these APIs are being implemented poorly over the web infrastructure > > without taking care of such basics as cacheability, idempotency etc. > > just as long as they set the don't cache headers on the responses, so > you dont end up with stale get data. > >
On 12/19/07, Berend de Boer <berend@...> wrote: > >>>>> "Steve" == Steve Loughran <steve.loughran.soapbuilders@...> writes: > > Steve> What are the normal response codes to HTTP delete operations? > > Either 202 or 204. > > IMO, the interpretation of idempotent for delete is that multiple > deletes lead to 202, i.e. there's no distinction between the two. The > identified resource is deleted. If you delete it once or twice, it's > still deleted and the response should be the same. If the intent is to delete the resource, and the resource never existed, I would reason that 404 is an acceptable response. Assaf > > -- > Cheers, > > Berend de Boer > >
Steve Loughran wrote: > What are the normal response codes to HTTP delete operations? > > yet HTTP1.1 says at ( > http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.7 ) > > "A successful response SHOULD be 200 (OK) if the response includes an > entity describing the status, 202 (Accepted) if the action has not yet > been enacted, or 204 (No Content) if the action has been enacted but > the response does not include an entity." > > yet S3 returns 404, so I'm coding in some expectations for that. Right. Note the use of the word "successful" in the text you quoted. If the resource did not exist, it could not be successfully deleted so you can't really expect a 2xx response code. -- Chris Burdess
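Putting the thread's positions side by side: DELETE can be idempotent in *effect* while the *response* still distinguishes "removed by this request" from "already absent". A sketch of such a handler (the store API here is hypothetical):

```python
def handle_delete(store, uri):
    """Idempotent DELETE: deleting twice leaves the same server state,
    but the status codes may differ between the two requests."""
    if uri in store:
        del store[uri]
        return 204   # No Content: enacted, no entity in the response
    # Nothing to delete. 404 Not Found needs no memory of the past;
    # 410 Gone is also legit if the server remembers the resource existed.
    return 404

store = {"/buckets/smartfrogtest": b"..."}
print(handle_delete(store, "/buckets/smartfrogtest"))  # 204
print(handle_delete(store, "/buckets/smartfrogtest"))  # 404
```

This matches the S3 behaviour Steve observed: the second DELETE is harmless, but a client that treats anything outside 2xx as failure will choke on it.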
On 12/19/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote: > On Dec 19, 2007 5:25 PM, Assaf Arkin <assaf@...> wrote: > > > > On 12/19/07, Steve Loughran <steve.loughran.soapbuilders@...> wrote: > > > > > > > 2. WS-* is pretty deep in the enterprise, especially as the glue > > > between "both" platforms, Windows and Java. Hence Sun's investment in > > > better WS-* interop. In in-house, single vendor systems, WS-* can be > > > made to work over space -but not necessarily time. Again, with a > > > decent deployment infrastructure when you can roll out code everywhere > > > simultaneously, then you can stay in control. > > > > > > The WS-* tooling has set up enterprise developers with certain expectations > > > -you dont need to know WSDL or XSD > > > -you dont need to work with raw XML > > > -you can take the WSDL and have the client code generated for you > > > There's probably also the implicit expectation that remote services > > > look like synchronous remote method calls. > > > > > > As we on this list know, it is these assumptions that lead to so many > > > problems -machine generated classes that can't handle changes in the > > > XML, blocking rpc operations that cant handle failures, etc. But out > > > there in the field, the stuff does work on a single version of an app, > > > using the same toolkit at both ends, so people can easily roll out > > > something big -because the computing world (esp. MS and IBM) say Ws-* > > > is good. Its only later that they discover problems with > > > -attempts to connect new clients > > > -attempts to change the interface > > > Failures at these point may be blamed on the new clients or their > > > tooling, rather than fundamental flaws in the whole development > > > methology of WS-* applications. 
> > > > I remember being told eight years ago that developers, already grown > > accustomed to the tooling and code generation, would see little value > > migrating from the comforts of CORBA to the theoretically more > > interesting, but practically immature, SOAP. Back when SOAP was > > rolling your own code to handle the ever so unfriendly DOM. > > > > So taking those eight years and projecting them into the future, my guess is: > > > > 1. Tools around REST will evolve to a similar level of comfort > > developers are accustomed to (as they always did). > > 2. And will generate inflexible code that will require rolling out > > changes everywhere simultaneously (as they always did). > > 3. Which will be blamed on the tooling, not the methodology (as we always did). > > > > Why the endless cycle? Because at each generation you are reducing > > the complexity involved in solving existing problems, liberating you > > to take on more challenging problems, bringing you back to the same > > level of complexity. Equilibrium, it seems, is the state when you're > > no longer able to deliver more features at standard industry costs. > > > > CEO frustration reigns, CTOs go reading InfoWorld for new answers, > > hype happens and a new cycle emerges. As it always does. > > > > What really changes in each generation is the size of the solution. > > If you remember back in CORBA days, two machines was considered a > > challenging distributed networking problem. Then we moved to server > > farms (more like cabinets) and grids, and now we're looking towards > > the clouds. > > I think that's a pretty bleak assessment. I don't take it too seriously, but if I did, I wouldn't find it all that depressing. We keep increasing capacity, we keep finding new ways to fill it up, and in doing so keep creating value for people around us. In my experience wide adoption of technologies is never about the technology but the business economics around it. 
> I have a different theory, > which is that every language is used as the prototype for its > successors. Metaphorically I would agree: lessons learned from one generation are applied to the next one -- hopefully we do learn from the mistakes of the past. But if we are making any predictions, I would consider the continuation from C/C++ to Java/C# anecdotal evidence and not draw any conclusions from it. For one, I think it suffers from selection bias. It's natural for people who hold the purity of languages in high regard to dismiss dBase, Turbo Pascal, VB, PowerBuilder et al and imagine a world where C was far more dominant. For another, it predicts that our choices moving forward are limited to young languages, like Scala and Groovy. Other languages getting the mindshare these days, like Ruby, Python, JavaScript, Erlang and Haskell, are far too old to be an evolution of the existing incumbent. Likewise, progressing from SOAP would require a radical new technology, precluding REST, which dates to the same time frame, describes an architecture that predates SOAP, and in fact led to the creation of SOAP. Sometimes the landscape changes to give rise to old technologies. Assaf > > CORBA and COM were written in the C era; they evolved to become > actually usable in C++ code. Their model of IDL->.h/.cpp stubs worked > very well with the superbly static world view of C/C++. Java and C# > adopted a more agile form of communications with RMI and .NET > remoting, both of which exploit the introspection features of the > languages to eliminate some of the workflow stages of COM/CORBA. SOAP > took some of the ideas of this RPC/distributed object world view, and > tried to make it cross-platform by using XML as the transport. SOAP 0.9 > section 5 encoding was clearly designed to marshal object graphs over > the wire; SOAP 1.0 and 1.1 adopted XSD to move to documents instead. > But the inherent inflexibilities of the language prevent the tools > from being agile.
If you want to move away from a DOM/XOM tree, you need to > know what XML to expect -at compile time. .NET 3.0 and Java 7 are > trying to handle XML with more agility, but they still don't like you > adding new attributes/methods to existing classes. Whereas if I were > to work with data structures in a more dynamic language (Scheme, > Prolog, JavaScript, etc.), you can turn an arbitrary incoming > text-encoded data structure straight into the type system of the platform > (Scheme lists, Prolog clauses, JavaScript prototypes, ...). > > If we are going to have clients and servers that are less brittle than > their previous generations, and yet which are still easy for people to > code for, I think we need to move beyond Java/C#. I'm not going to > advocate any specific language, I just think it's time to move on, at > least for the bits of code that deal with communication > with other machines. Which, as it turns out, is a large slice of > modern applications. > > -steve >
On Dec 20, 2007, at 4:37 AM, Steve Loughran wrote: > If we are going to have clients and servers that are less brittle than > their previous generations, and yet which are still easy for people to > code for, I think we need to move beyond java/C#. +1. The Web and the static world view of these languages simply don't match. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
At Fri, 21 Dec 2007 10:19:08 +1300, Berend de Boer <berend@...> wrote: > > […] > > Successful doesn't mean the delete was successful. If the system is > happy with the request, then any 202 is ok. > > If you distinguish between the cases where the url exists or not, DELETE > is no longer idempotent. Idempotence means that the side-effects of n > 0 requests are the same. It does not mean that the responses to n > 0 requests are the same. As an example, if I PUT to a resource which does not exist, a server is required to return a 201. If I PUT to a resource that does exist, it can’t return a 201. PUT has the property of idempotence, but two successive identical PUT requests to a previously non-existent resource will return two different responses, the first a 201, the second a 200, or even a 202, while the side effects are the same. best, Erik Hetzner ;; Erik Hetzner, California Digital Library ;; gnupg key id: 1024D/01DB07E3
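Erik's point about idempotence of side effects versus responses can be sketched in a few lines. This is a hypothetical in-memory server, not any real HTTP stack: two identical PUTs leave the same state, yet the first returns 201 and the repeat 200.

```python
# Minimal sketch of a server's PUT handling: the side effect is identical
# on every repeat, but the status code distinguishes creation from update.

class ResourceStore:
    def __init__(self):
        self.resources = {}

    def put(self, uri, representation):
        created = uri not in self.resources
        self.resources[uri] = representation  # same side effect every time
        return 201 if created else 200

store = ResourceStore()
first = store.put("/items/42", b"hello")   # resource did not exist -> 201
second = store.put("/items/42", b"hello")  # identical repeat -> 200
```

The differing status codes do not break idempotence: after one PUT or ten, `store.resources["/items/42"]` holds the same representation.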
On 12/20/07, Berend de Boer <berend@...> wrote: > >>>>> "Chris" == Chris Burdess <dog@...> writes: > > Chris> Right. Note the use of the word "successful" in the text you > Chris> quoted. If the resource did not exist, it could not be > Chris> successfully deleted so you can't really expect a 2xx > Chris> response code. -- Chris Burdess > > Successful doesn't mean the delete was successful. If the system is > happy with the request, then any 202 is ok. > > If you distinguish between the cases where the url exists or not, DELETE > is no longer idempotent. That depends on whether you're looking at the response or the side-effect. If you only care about the side-effect -- resource no longer there -- then any status code that indicates that is acceptable. 9.1.2 says: Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. Assaf > > -- > Cheers, > > Berend de Boer > >
There is no official dev forum yet for SimpleDB. I just posted a reference to this thread on the AWS blog (http://aws.typepad.com/aws/2007/12/a-place-for-eve.html ). Subbu On Dec 19, 2007, at 9:02 PM, Steve Bjorg wrote: > Has anyone voiced their concerns and suggestions on the Amazon Web > Services forums? I spoke with a few of their engineers and project > leaders. They are smart guys and want to do the right thing. For > now, SimpleDB isn't even yet in closed beta. I would hope that if > members of this list provide constructive suggestions on how to > implement their API in a more RESTful manner, they might actually > listen and act upon it. > > - Steve > > -------------- > Steve G. Bjorg > http://wiki.mindtouch.com > http://wiki.opengarden.org > > > On Dec 19, 2007, at 1:38 PM, Steve Loughran wrote: > >> On Dec 19, 2007 1:29 PM, Subbu Allamaraju >> <subbu.allamaraju@...> wrote: >> > (completing the rant) >> > >> > The good thing is that all these companies are trying to open up >> their >> > systems with public APIs. However, the REST-branding is >> unfortunate. >> > It is not just the quality of these APIs that is bothersome, but the >> > quality of infrastructure they are building behind these APIs. >> Since >> > these APIs are so HTTP-unfriendly, I can't help but conclude that >> > these APIs are being implemented poorly over the web infrastructure >> > without taking care of such basics as cacheability, idempotency >> etc. >> >> just as long as they set the don't-cache headers on the responses, so >> you don't end up with stale GET data. >> > >
I have a draggable list of resources, think a TODO list.
Each item is a resource
/items/{id}
and has a position in the collection. How would you represent that? I
think this is a particular case of changing a single attribute of a
resource.
Would
PUT /items/{id}/position/{index}
be an orthodox URL?
-- fxn
On Dec 21, 2007, at 3:56 PM, Bob Haugen wrote:
> On Dec 21, 2007 8:42 AM, Xavier Noria <fxn@...> wrote:
>> I have a draggable list of resources, think a TODO list.
>> Each item is a resource
>> /items/{id}
>> and has a position in the collection. How would you represent that? I
>> think this is a particular case of changing a single attribute of a
>> resource.
>
> You might want to consider making the order a property of the list,
> not of the resources.
Oh sorry, the wording is bad.
The resources within a list are the ones which are draggable, the
items themselves. So an item has a position in its parent list to
represent its current position. In this case there's a single list: a
configurable list of proposal status for a CRM.
So the idea is to be able to represent more or less "move item {id} to
position {position} and let the ones after {position} adjust their
index accordingly".
-- fxn
To represent the position of an item, you have two choices:
1) let the position be a property of the item
2) let the position be a property about the item
In #1, you'll need to modify the item to change the position.
In #2, you need to modify the entity (i.e. collection) that describes
the position.
#2 feels more natural than #1, imho.
- Steve
--------------
Steve G. Bjorg
http://wiki.mindtouch.com
http://wiki.opengarden.org
On Dec 21, 2007, at 7:26 AM, Xavier Noria wrote:
> On Dec 21, 2007, at 3:56 PM, Bob Haugen wrote:
>
> > On Dec 21, 2007 8:42 AM, Xavier Noria <fxn@...> wrote:
> >> I have a draggable list of resources, think a TODO list.
> >> Each item is a resource
> >> /items/{id}
> >> and has a position in the collection. How would you represent
> that? I
> >> think this is a particular case of changing a single attribute of a
> >> resource.
> >
> > You might want to consider making the order a property of the list,
> > not of the resources.
>
> Oh sorry, the wording is bad.
>
> The resources within a list are the ones which are draggable, the
> items themselves. So an item has a position in its parent list to
> represent its current position. In this case there's a single list: a
> configurable list of proposal status for a CRM.
>
> So the idea is to be able to represent more or less "move item {id} to
> position {position} and let the ones after {position} adjust their
> index accordingly".
>
> -- fxn
>
>
>
you might want to create a "list-order" resource that can be edited.
for example:
GET /list/{list-name}/list-order
returns a document with all the items in the list in their current order
use your favorite UI to display and manipulate this document to create
a new order for the list
PUT /list/{list-name}/list-order
sends that edited document back to the server
the server then does whatever magic is needed to commit that
information to permanent storage
mike a
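Mike's GET / edit / PUT round trip over a separate "list-order" resource can be sketched as below. This is an illustrative in-memory stand-in for the server, with made-up function names, not a real HTTP implementation:

```python
# Sketch of the "list-order as its own resource" idea: ordering lives in
# a separate editable document, so reordering never touches the items.

order_store = {"todo": ["a", "b", "c"]}   # server-side order per list

def get_list_order(list_name):
    # GET /list/{list-name}/list-order -> a copy of the current ordering
    return list(order_store[list_name])

def put_list_order(list_name, new_order):
    # PUT /list/{list-name}/list-order -> replace the whole ordering;
    # reject documents that add or drop members (that's a different operation)
    if sorted(new_order) != sorted(order_store[list_name]):
        return 409  # Conflict: membership changed, not just order
    order_store[list_name] = list(new_order)
    return 200

doc = get_list_order("todo")
doc.insert(0, doc.pop(doc.index("c")))    # drag "c" to the front in the UI
status = put_list_order("todo", doc)      # send the edited document back
```

Because the client PUTs the complete ordering document, the update stays a whole-resource replacement rather than a partial update of each item.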
On Dec 21, 2007 10:26 AM, Xavier Noria <fxn@...> wrote:
> On Dec 21, 2007, at 3:56 PM, Bob Haugen wrote:
>
> > On Dec 21, 2007 8:42 AM, Xavier Noria <fxn@...> wrote:
> >> I have a draggable list of resources, think a TODO list.
> >> Each item is a resource
> >> /items/{id}
> >> and has a position in the collection. How would you represent that? I
> >> think this is a particular case of changing a single attribute of a
> >> resource.
> >
> > You might want to consider making the order a property of the list,
> > not of the resources.
>
> Oh sorry, the wording is bad.
>
> The resources within a list are the ones which are draggable, the
> items themselves. So an item has a position in its parent list to
> represent its current position. In this case there's a single list: a
> configurable list of proposal status for a CRM.
>
> So the idea is to be able to represent more or less "move item {id} to
> position {position} and let the ones after {position} adjust their
> index accordingly".
>
>
> -- fxn
>
>
>
>
> Yahoo! Groups Links
>
>
>
>
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
Xavier Noria wrote:
>
>
> On Dec 21, 2007, at 3:56 PM, Bob Haugen wrote:
>
> > On Dec 21, 2007 8:42 AM, Xavier Noria <fxn@hashref.com> wrote:
> >> I have a draggable list of resources, think a TODO list.
> >> Each item is a resource
> >> /items/{id}
> >> and has a position in the collection. How would you represent that? I
> >> think this is a particular case of changing a single attribute of a
> >> resource.
> >
> > You might want to consider making the order a property of the list,
> > not of the resources.
>
> Oh sorry, the wording is bad.
>
> The resources within a list are the ones which are draggable, the
> items themselves. So an item has a position in its parent list to
> represent its current position. In this case there's a single list: a
> configurable list of proposal status for a CRM.
>
> So the idea is to be able to represent more or less "move item {id} to
> position {position} and let the ones after {position} adjust their
> index accordingly".
See <http://greenbytes.de/tech/webdav/rfc3648.html>.
BR, Julian
On Dec 21, 2007, at 5:08 PM, Julian Reschke wrote: > See <http://greenbytes.de/tech/webdav/rfc3648.html>. Thank you. That approach uses custom verbs with custom arguments. Do you think it translates to REST? -- fxn
On Dec 21, 2007, at 4:44 PM, Steve Bjorg wrote: > To represent the position of an item, you have two choices: > 1) let the position be a property of the item > 2) let the position be a property about the item > > In #1, you'll need to modify the item to change the position. > > In #2, you need to modify the entity (i.e. collection) that > describes the position. > > #2 feels more natural than #1, imho. Which kind of URLs would result from #2? -- fxn
Xavier Noria wrote: > > > On Dec 21, 2007, at 5:08 PM, Julian Reschke wrote: > > > See <http://greenbytes.de/tech/webdav/rfc3648.html>. > > Thank you. > > That approach uses custom verbs with custom arguments. Do you think it > translates to REST? It doesn't necessarily use custom verbs; for instance you can use PUT with the "Position" header to insert a new member at a specific position. That being said, there's nothing in REST that says that additional methods are automatically bad, they just need to have some universal use. I understand that people dislike WebDAV for a few reasons, but if you want collections that let the user control the namespace (pick the URIs), then there's really little reason not to use MKCOL/PUT/COPY/MOVE for namespace manipulation (RFC4918). And if, as in this case, you want the collections to preserve their ordering, RFC3648 gives you an interoperable and tested way to do this. Best regards, Julian
On Dec 21, 2007, at 9:03 PM, Julian Reschke wrote:
> I understand that people dislike WebDAV for a few reasons, but if
> you want collections that let the user control the namespace (pick
> the URIs), then there's really little reason not to use MKCOL/PUT/
> COPY/MOVE for namespace manipulation (RFC4918). And if, as in this
> case, you want the collections to preserve their ordering, RFC3648
> gives you an interoperable and tested way to do this.
I agree with you in the sense that my action is essentially a MOVE,
and thus that would be better.
Nevertheless this is a web application and I need to come up with an
API for an ordered collection. The best I've come up with so far is
something like this (assume a task list):
# get task list
GET /tasks
# get the first task
GET /tasks/first
# get the last task
GET /tasks/last
# get a task by position
GET /tasks/{position}
# push a new task
POST /tasks
But I have no idea how MOVE fits there, or which would be the right
way to delete a task, because to avoid race conditions with the
position I would like to be able to refer to them by ID as well.
-- fxn
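Xavier's race-condition worry (positions shift under concurrent edits, so items should also be addressable by ID) can be illustrated with a small in-memory sketch. The MOVE-style operation and its URL shapes here are hypothetical, just one way to combine position-based reads with ID-based writes:

```python
# Sketch of an ordered task collection: reads may use positions, but
# mutations target stable IDs, so a concurrent reorder can't make a
# DELETE or MOVE hit the wrong task.

class TaskList:
    def __init__(self, ids):
        self.order = list(ids)          # ordered task IDs

    def get(self, position):
        # GET /tasks/{position} -> the ID at that position
        return self.order[position]

    def move(self, task_id, index):
        # hypothetical "MOVE /tasks/{id}?to={index}": reposition by stable ID
        self.order.remove(task_id)
        self.order.insert(index, task_id)

    def delete(self, task_id):
        # DELETE /tasks/{id}: addressed by ID, not by (racy) position
        self.order.remove(task_id)

tasks = TaskList(["t1", "t2", "t3", "t4"])
tasks.move("t4", 0)    # drag the last task to the top
tasks.delete("t2")     # remove by ID regardless of its current position
```

Mapping these operations to HTTP verbs (WebDAV MOVE, or a PUT of a new ordering document) is the open design question of the thread; the sketch only shows why ID-based addressing sidesteps the race.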
Hey, Have you tried using Atom? AtomPub is meant to be used for a set/collection of stuff. The only thing you will have to additionally worry about is how to change the ordering etc. amongst the elements. You could easily do that by an Edit on the resource (look up the RFC on which URI to use and what verb to use here). The user will just run an Edit on the resource he wants to change, and the reordering will be taken care of by the server. Why am I asking you to do this? Because AtomPub would nearly make sure that you are being RESTful and your URI design/use of verbs etc. is all correct, making the actual design and implementation very simple. Look up Dan Diephouse's netzooid.com/presentations/atompub_services.ppt for a quick and simple overview of AtomPub. I think it should work fine for you. Regards, dev
So, did this go anywhere afterwards?
I like this. I think the differences between it and POE et al are going
to come down to operational considerations. E.g., POE requires the
server to mint a lot of new URIs, whereas this doesn't. POE is
backwards-compatible with existing forms, this isn't as much (or at
least, browser forms will default to a state where requests can be
repeated).
So, if I were trying to make a Web site have at-most-once semantics
for POSTs, I'd probably use POE.
However, if I were using HTTP for integrating services, and I could
specify client support for extensions, I'd probably choose this.
Make sense?
OTOH, if you were willing to do some JS form.submit() magic, I imagine
you could make this approach work with HTML pretty seamlessly...
On 08/02/2007, at 7:46 AM, Benjamin Carlyle wrote:
> On Sat, 2007-02-03 at 23:24 -0500, Mark Baker wrote:
>> On 2/3/07, Benjamin Carlyle <benjamincarlyle@...> wrote:
>>> So here are the strategies I can think of seeing so far:
>>> 1. Have the user observe some property of the system to determine
>>> whether to retry themselves. In SCADA this might be to observe a
>> change
>>> in voltage before deciding whether or not to retry a circuit-breaker
>>> trip. This can be automated as another SCADA concept: "Target state
>>> monitoring". Regardless of the response we received, did the resource
>>> actually reach the state we intended?
>> A technique I've used once was to have the client send an HTTP header
>> in the POST request which played a role sort of like a client-side
>> etag with respect to the request body. The server, upon receiving the
>> message and updating the state of the resource, would return another
>> header containing a hash of the last day's worth of tags (which wasn't
>> many) on GET requests to that resource so that it could check if
>> *its*
>> update was applied.
>
> I'm starting to like this approach. Let me have a go at rephrasing
> it as
> a concrete proposal:
>
> Problem statement: (same as before)
> I have some state that I want to append to a resource. The right
> method
> according to HTTP is POST, but if I don't get a response to my POST I
> don't know whether or not to retry.
>
> Client algorithm:
> ...
> guid = generateGloballyUniqueID();
> request.addHeader("Client-Etag",guid);
> try
> {
> retryPOST:
> startOrResetTimer(reasonable digest retention period, eg 2min);
> factory.POST(request);
> }
> catch (NoResponse) // aka GatewayTimeout
> {
> etagDigest = factory.GET();
> if (guid in etagDigest)
> {
> // Nothing to be done. The POST was successful.
> }
> else
> {
> // One of two possibilities exist. Either,
> // * our POST didn't arrive, or
> // * our etag has cycled out of the digest
> // We try to ensure that the latter doesn't
> // happen by giving up after a reasonable
> // period.
> goto retryPOST;
> }
> }
> catch (RetentionPeriodTimeout)
> {
> // It is still possible that our etag would be in
> // the digest at this point, so we could do a final
> // GET. If we are in the digest, there is no problem.
> // If we are not in the digest we can no longer assume
> // that it is because our request didn't happen.
> // Our request might have simply cycled out.
> }
> catch (...)
> {
> // Normal error handling
> }
>
> Server constraints:
> * Client etags are stored in the factory as a digest of recent POST
> requests for a reasonable amount of time
> * Only successful requests have their etag stored in the digest, so
> clients can still retry failed requests. Success would generally mean
> that state was successfully appended to the server, though there may
> be
> some corner cases.
>
> Possible efficiency improvements:
> * A URI template might allow the client to query for their specific
> etag, but a protocol would have to be developed for this. Perhaps
> instead of a digest, the factory could return this template. That
> would
> also potentially deal with security issues arising from guids leaking
> from one client to another.
>
> Pros/Cons:
> * In the normal case where the POST does not time out there is very
> little extra communications overhead
> * The server has to store the state of recent successful requests
> for a
> period rather than the state of requests that did not go ahead. ie we
> trade less communications overhead for more server state overhead. On
> the other hand, this server state overhead should be proportional to
> the
> amount of state the server allowed to be appended to itself as part of
> the POST. It doesn't change the fundamental server-side state
> picture...
> just changes the constant.
>
> Cautions
> * Under extreme conditions there could still be a race condition
> between
> a POST arriving at the server and a GET request being issued to the
> factory or template-derived url. This shouldn't really happen if the
> client gives up on the POST under reasonable conditions. Those might
> include "40s has passed", or "I'm using TCP/IP keepalive while
> requests
> are outstanding to monitor our shared communication state, and the HA
> cluster member I was talking to appears to have been replaced by its
> backup, killing my connection". The final case of "I'm using TCP/IP
> keepalive while requests are outstanding, and it simply timed out
> due to
> network conditions" could still be a problem.
>
> Benjamin
>
--
Mark Nottingham http://www.mnot.net/
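Benjamin's client algorithm above can be simulated end to end in a few lines. This is a toy model with invented names (no real network, and the digest query protocol is the open question the post itself raises): the client loses the response to its POST, then consults the server's digest of recently applied client etags to decide whether a retry is safe.

```python
# Sketch of the "client-side etag" recovery scheme: the server keeps a
# bounded digest of tags from recently successful POSTs; a client that
# loses a response checks the digest before retrying, so the append
# happens at most once.

import uuid
from collections import deque

class Factory:
    def __init__(self, digest_size=100):
        self.entries = []
        self.recent_tags = deque(maxlen=digest_size)  # bounded retention

    def post(self, client_etag, body, drop_response=False):
        # Apply the side effect and record the tag, then (maybe) lose
        # the response on the way back to the client.
        self.entries.append(body)
        self.recent_tags.append(client_etag)
        if drop_response:
            raise TimeoutError("response lost in transit")
        return 201

    def get_digest(self):
        # GET on the factory returns the digest of recent client etags.
        return set(self.recent_tags)

def reliable_post(factory, body):
    tag = str(uuid.uuid4())
    try:
        return factory.post(tag, body, drop_response=True)  # simulate loss
    except TimeoutError:
        if tag in factory.get_digest():
            return 201                    # our POST was applied; don't retry
        return factory.post(tag, body)    # provably not applied; retry safely

factory = Factory()
status = reliable_post(factory, "append me")
```

The caution from the post carries over: if the tag has cycled out of the bounded digest, its absence no longer proves the POST was lost, which is why the client must give up retrying within the retention period.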
There's a trend in RESTful web applications to implement the "login" action as "create a session resource". So you point the login form to POST /session That doesn't sound too bad, but I have mixed feelings because I send a login and a password to create a resource whose representation has nothing to do with logins and passwords. (Please forget for a moment whether a session is itself RESTful.) My metaphor for thinking in resources is files: either I send the entire representation of a resource or I am not playing by the rules. For example I am not allowed to send a partial representation of a resource meaning "update just these attributes", because REST just allows PUTing the whole resource (please correct me if that's more flexible). So, if I send login and password I send input to construct a resource, but I am not sending the very resource. So it smells suspicious. What do you think? -- fxn
[ Attachment content not displayed ]
one way to look at this is that you are actually sending a
"credentials resource" to the server:
<credentials>
<user>user1</user>
<password>{hash-of-password}</password>
</credentials>
when you POST this to the server, the server might use that resource
to create a new one - a "session resource" that can be later used for
other purposes:
request:
POST /session
<credentials />
response:
201
Location:/session/{session-id}
Mike A
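The credentials-in, session-out factory Mike describes can be sketched as follows. The user store, hashing, and session ID scheme are placeholders; the point is only that the POSTed representation (credentials) is input to the factory, not the representation of the resource it creates:

```python
# Sketch of POST /session as a factory: credentials go in, a new
# session resource comes out, identified by the Location of the 201.

import hashlib
import uuid

USERS = {"user1": hashlib.sha256(b"secret").hexdigest()}  # demo user store
SESSIONS = {}

def post_session(user, password):
    # POST /session with a <credentials> document
    if USERS.get(user) != hashlib.sha256(password.encode()).hexdigest():
        return 401, None
    session_id = uuid.uuid4().hex
    SESSIONS[session_id] = user
    # 201 Created plus the Location of the newly minted session resource
    return 201, f"/session/{session_id}"

status, location = post_session("user1", "secret")
```

A later GET on `location` would return the session's own representation, which, as Xavier noticed, looks nothing like the credentials that created it; that asymmetry is exactly what a factory resource permits.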
On Dec 22, 2007 8:21 PM, Xavier Noria <fxn@...> wrote:
> There's a trend in RESTful web applications to implement the "login"
> action as "create a session resource". So you point the login form to
>
> POST /session
>
> That doesn't sound too bad, but I have mixed feelings because I send a
> login and a password to create a resource whose representation has
> nothing to do with logins and passwords. (Please forget for a moment
> whether a session is RESTful or not itself).
>
> My metaphor for thinking in resources is files: either I send the
> entire representation of a resource or I am not playing by the rules.
> For example I am not allowed to send a partial representation of a
> resource meaning "update just these attributes", because REST just
> allows PUTing the whole resource (please correct me if that's more
> flexible).
>
> So, if I send login and password I send input to construct a resource,
> but I am not sending the very resource. So it smells suspicious.
>
> What do you think?
>
> -- fxn
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
On Dec 23, 2007, at 3:02 AM, Karen wrote: > On 12/22/07, Xavier Noria <fxn@...> wrote: > > So, if I send login and password I send input to construct a resource, > but I am not sending the very resource. So it smells suspicious. > > Sounds like you're POSTing to a factory resource, which is not > uncommon in REST. Excellent, I wasn't aware that kind of resource was allowed. So this is allowed, and it is also allowed to send entire resources. I'll ask now about partial updates in a new thread. -- fxn
In a different thread we've seen you can create resources sending
information that won't end up in their representation (credentials to
create a session in a web application). On the other hand you can of
course send entire representations to create or update a resource.
I wonder about what's allowed regarding partial updates. For example,
"mark this message as read" in the traditional way of thinking may
send an Ajax call to /messages/mark_as_read/{id}, which is RPC-like.
You can as well send just a password and password confirmation to
reset the credentials of a user resource that has 15 additional
attributes you don't send.
In general I would like to have a REST pattern for sending a *subset*
of attributes to be updated if that fits the REST paradigm, or else to
be certain it doesn't.
-- fxn
[ Attachment content not displayed ]
On Dec 23, 2007, at 3:24 PM, Karen wrote:
> Instead of thinking of it as a partial update, try thinking of it as
> a partial resource.
>
> That is, Wirebird does the mark-read function (among some other
> things) using a different resource, called "range" - it's a
> container resource, so the actual messages are given as entries on a
> GET, but it's also a separate resource (even a single-message range)
> from the message, so you can send only the things defined as part of
> the range. In Wirebird's case, since its messages are mailing-list/
> webforum posts, and thus shared, the range includes the read marker
> and eventually any other flags (favoriting, etc.) that are specific
> to a single user.
>
> Likewise with user credentials: you expose the password as its own
> resource, rather than doing a partial update.
That would be
PUT /users/{id}/password
password={password}&password_confirmation={password_confirmation}
?
> If you haven't read the RESTful Web Services book, I strongly
> recommend doing so. Plenty of examples of how solving most REST
> "problems" is really just finding the right resource to expose.
Yes I have read it from cover to cover and understand that. I know
that I can resend the message, I know I can define an ad-hoc resource.
What I am unsure about is what is _disallowed_ or legal but discouraged.
If I had written the password update on my own I would have been
unsure about its correctness, because I am sending data to create a
logical resource (I don't store passwords) that is really a backdoor
for modifying another resource. I can't convince myself that this makes
sense on its own. In fact it feels like an unnatural design for the
sake of sticking to a pattern.
-- fxn
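Karen's "expose the password as its own resource" suggestion, in Xavier's URL shape, can be sketched like this. The field names and the 422/204 status choices are assumptions for illustration, and `hash()` stands in for a real key-derivation function:

```python
# Sketch of PUT /users/{id}/password: the password is a child resource,
# so replacing it wholesale is a full PUT of *that* resource, not a
# partial update of the user.

users = {"7": {"name": "fxn", "password_hash": None}}

def put_password(user_id, password, confirmation):
    # PUT /users/{id}/password with password + password_confirmation
    if password != confirmation:
        return 422  # confirmation mismatch: unprocessable
    users[user_id]["password_hash"] = hash(password)  # stand-in for a real KDF
    return 204  # replaced; nothing useful to return

status = put_password("7", "s3cret", "s3cret")
```

The trade-off Xavier senses is real: the "resource" here is logical (the server stores only a hash), but its representation is complete for the resource as defined, which is what keeps the PUT honest.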
Xavier Noria wrote: > In general I would like to have a REST pattern for sending a *subset* > of attributes to be updated if that fits the REST paradigm, or else to > be certain it doesn't. > Yes, I'm currently trying to decide how to distinguish between <foo> <bar>23</bar> <baz>17</baz> </foo> and <foo> <bar>23</bar> </foo> In particular does the latter eliminate baz, or simply ignore it? And what if I need to do both in my app? In my case, the empty string is not the same as null. Perhaps I need something like <foo> <bar>23</bar> <baz xsi:nil='true'/> </foo> I'm not sure. Suggestions are appreciated. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Rusty Harold wrote: > Yes, I'm currently trying to decide how to distinguish between > > <foo> > <bar>23</bar> > <baz>17</baz> > </foo> > > and > > <foo> > <bar>23</bar> > </foo> > > In particular does the latter eliminate baz, or simply ignore it? And > what if I need to do both in my app? In my case, the empty string is not > the same as null. perhaps I need something like It should eliminate. PUT is *not* partial update. > <foo> > <bar>23</bar> > <baz xsi:nil='true' /> > </foo> > > I'm not sure. Suggestions are appreciated. <http://tools.ietf.org/html/draft-dusseault-http-patch-10>, plus a patch format suitable for updating XML. BR, Julian
On Dec 23, 2007, at 6:19 PM, Julian Reschke wrote: > <http://tools.ietf.org/html/draft-dusseault-http-patch-10>, plus a > patch > format suitable for updating XML. I never understood why I'd need both a new verb (PATCH) and a new content type, as PUTting with some 'diff' content type is unambiguous. Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Dec 23, 2007, at 12:50 PM, Stefan Tilkov wrote: > I never understood why I'd need both a new verb (PATCH) and a new > content type, as PUTting with some 'diff' content type is unambiguous. > Because it unambiguously means that the representation is a diff. That is significantly different from "apply this diff". ....Roy
Elliotte Rusty Harold wrote: > Xavier Noria wrote: > > >> In general I would like to have a REST pattern for sending a *subset* >> of attributes to be updated if that fits the REST paradigm, or else to >> be certain it doesn't. >> >> > > Yes, I'm currently trying to decide how to distinguish between > > <foo> > <bar>23</bar> > <baz>17</baz> > </foo> > > and > > <foo> > <bar>23</bar> > </foo> > > In particular does the latter eliminate baz, or simply ignore it? And > what if I need to do both in my app? In my case, the empty string is not > the same as null. perhaps I need something like > > <foo> > <bar>23</bar> > <baz xsi:nil='true'/> > </foo> > In my app, a subsequent GET may return baz with the value xsi:nil, so I won't say it's eliminated, simply set to that value. In fact, to be more precise, it's reset, since some fields have a default value, and sending back the default eliminates a lot of guesswork on the client side. So nil/empty resets, and omitting the value doesn't update it. I treat nil/empty the same; it seems to work better with HTML forms. Assaf > I'm not sure. Suggestions are appreciated. > > > > >
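The convention Assaf describes (omitted field means "leave unchanged", nil/empty means "reset to default") is easy to pin down in code. A minimal sketch, with made-up field names and defaults, modelling the update as a dict where a missing key and a `None`/`""` value mean different things:

```python
# Sketch of Assaf's merge rules for updates: omitted -> keep current
# value; nil or empty -> reset to the field's default; anything else
# -> set the new value.

DEFAULTS = {"bar": 0, "baz": 0}

def apply_update(current, update):
    result = dict(current)
    for field, value in update.items():
        # nil/empty are treated the same, as with HTML forms
        result[field] = DEFAULTS[field] if value in (None, "") else value
    return result

state = {"bar": 23, "baz": 17}
state = apply_update(state, {"bar": 42})     # baz omitted -> untouched
state = apply_update(state, {"baz": None})   # baz nil -> reset to default
```

Note this is a partial-update semantics, so under the thread's own conclusion it belongs behind PATCH (or a POST), not behind a plain PUT, whose body asserts the complete new state.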
* bertie_wooster_funny <f2005125@...> [2007-12-22 03:25]: > You could easily do that by an Edit on the resource (look up > the RFC on which URI to use and what verb to use here). In general, you can only move a single item to the first position in that way. That’s not quite what Xavier was after. You’d need some extension element in entries in a collection. Other than this extension it does indeed seem that AtomPub fits very well, as this is a collection manipulation task and manipulating collections is the problem the protocol was designed to solve. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On Dec 24, 2007, at 12:38 PM, A. Pagaltzis wrote: > * bertie_wooster_funny <f2005125@...> [2007-12-22 > 03:25]: > > You could easily do that by an Edit on the resource (look up > > the RFC on which URI to use and what verb to use here). > > In general, you can only move a single item to the first position > in that way. That's not quite what Xavier was after. > > You'd need some extension element in entries in a collection. > Thank you! I am not familiar with that terminology, would you please explain "extension"? Which URLs and semantics does it have? -- fxn
* Xavier Noria <fxn@...> [2007-12-23 02:25]: > My metaphor for thinking in resources is files: It’s a pretty crude metaphor. A filesystem does not encapsulate computation. > For example I am not allowed to send a partial representation > of a resource meaning "update just these attributes", because > REST just allows PUTing the whole resource (please correct me > if that's more flexible). It’s more nuanced than that. REST first and foremost constrains the intent rather than the behaviour. PUT basically means that the client is asserting that the new state of the resource corresponds to the representation in the request body. However, the server may honour that request in any way it chooses, incl. deriving parts of the new resource state from previous resource state. What’s crucial, though, is that the client cannot assume that this will be the case, and the server cannot assume that the client intended a partial update. Having said all of that, we’re on the wrong page anyway, as this applies to PUT whereas your initial inquiry was about POST. POST really doesn’t have a lot of semantics (which makes it an escape hatch, but also easy to abuse as a crutch when you’re failing to model a problem correctly in terms of resources). All it means is that the client is asking the server to process the given information somehow. A relatively narrow meaning of that, which is the main purpose of POST and common in practice, is that the client-provided information is to be used to create a new resource. So the design you asked about is perfectly admissible. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
* Xavier Noria <fxn@...> [2007-12-24 12:55]: > I am not familiar with that terminology, would you please > explain "extension"? Which URLs and semantics does it have? It doesn’t have anything to do with URLs, it’s just a namespaced element or attribute within an Atom Entry document. (The Feed Rank extension draft might be a starting point; but I have to add the disclaimer that I’ve never really looked at it.) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
So, combining Roy's answer ... > From: "Roy T. Fielding" <fielding@...m> > Date: December 23, 2007 10:23:12 PM GMT+01:00 > To: Stefan Tilkov <stefan.tilkov@...> > Cc: REST Discuss <rest-discuss@yahoogroups.com> > Subject: Re: [rest-discuss] how to do partial updates to resources > > On Dec 23, 2007, at 12:50 PM, Stefan Tilkov wrote: > > I never understood why I'd need both a new verb (PATCH) and a new > > content type, as PUTting with some 'diff' content type is > unambiguous. > > > > Because it unambiguously means that the representation is a diff. > That is significantly different from "apply this diff". > > ....Roy > ... and Aristotle's (although in another thread) ... > From: "A. Pagaltzis" <pagaltzis@...> > Date: December 24, 2007 1:03:01 PM GMT+01:00 > To: REST Discuss <rest-discuss@yahoogroups.com> > Subject: [rest-discuss] Re: RESTful login in web apps > > PUT basically means that the client is asserting that the new > state of the resource corresponds to the representation in the > request body. However, the server may honour that request in any > way it chooses, incl. deriving parts of the new resource state > from previous resource state. What's crucial, though, is that the > client cannot assume that this will be the case, and the server > cannot assume that the client intended a partial update. > > ... I understand that for the intent to be clear, there'd have to be a new verb such as PATCH if *any* client should be able to express the intent of a partial update. And although it would be _OK_ for a server to handle a PUT with a 'diff' differently, this would have to be part of a server-specific out-of-band agreement (and a generic client couldn't be held responsible). Or would you say it's _wrong_ for a server to treat a PUT with a diff format this way? Thanks, Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
On Dec 24, 2007 2:27 PM, Stefan Tilkov <stefan.tilkov@...> wrote: > > Or would you say it's _wrong_ for a server to treat a PUT with a diff > format this way? It has no way of setting the state of a resource to a patch body. For example, let's say my copy of Apache supports partial updates, but I also like to keep a directory of in-progress patches on the server. REST prescribes a uniform interface, so that a method's semantics stay the same no matter which resource it is applied to. The meaning of PUT with a diff media type doesn't change when I navigate away from my patch directory. -- Robert Sayre "I would have written a shorter letter, but I did not have the time."
On Dec 24, 2007, at 11:27 AM, Stefan Tilkov wrote: >> From: "A. Pagaltzis" <pagaltzis@...> >> PUT basically means that the client is asserting that the new >> state of the resource corresponds to the representation in the >> request body. However, the server may honour that request in any >> way it chooses, incl. deriving parts of the new resource state >> from previous resource state. What's crucial, though, is that the >> client cannot assume that this will be the case, and the server >> cannot assume that the client intended a partial update. >> >> > ... I understand that for the intent to be clear, there'd have to be a > new verb such as PATCH if *any* client should be able to express the > intent of a partial update. > And although it would be _OK_ for a server to handle a PUT with a > 'diff' differently, this would have to be part of a server-specific > out-of-band agreement (and a generic client couldn't be held > responsible). > > Or would you say it's _wrong_ for a server to treat a PUT with a diff > format this way? It is wrong. A resource is not just the state -- it is also the mapping from state to representation(s). For example, you can have one resource that is my home page and a second resource that is the text/html version of my home page, and those two resources are different even if they have the same representation most of the time and the same state all of the time. A diff is a representation of how to get from one state to another, not a representation of either of those states. Therefore, it is always wrong to "patch" the resource on a PUT of a diff, as opposed to setting the resource state to the diff. That is why I introduced PATCH in the first drafts of HTTP/1.1. What Aristotle may have been referring to is the ability of a server to take one state update (via PUT) and reflect that update in various ways on later GET. 
That is because there is no correlation in HTTP between a PUT and any subsequent GET other than that described by the server itself (via etags). For example, if you do a PUT of a diff in context (diff -C3) format, it is reasonable for the server to later supply a different representation of that same diff in the uniform (diff -u) format (assuming anyone ever bothers to register the media types). ....Roy
* Roy T. Fielding <fielding@...> [2007-12-24 23:25]: > What Aristotle may have been referring to is the ability of a server > to take one state update (via PUT) and reflect that update in > various ways on later GET. That is because there is no correlation > in HTTP between a PUT and any subsequent GET other than that > described by the server itself (via etags). For example, if you do > a PUT of a diff in context (diff -C3) format, it is reasonable for > the server to later supply a different representation of that same > diff in the uniform (diff -u) format (assuming anyone ever bothers > to register the media types). That is related (in both cases, the key is that PUT does not mean byte-for-byte storage), but is not quite what I was referring to. An example of what I was thinking of is when a client PUTs a representation that cannot, due to the format used, describe the entire state of the resource. In that case, the server has to fill in the rest of the state somehow, and it may or may not use previous resource state to do so. Also, there may be aspects of resource state that the server does not allow clients to modify, ever. An obvious example of the latter would be the app:edited element in an Entry stored by an Atompub server. But it’s easy to imagine a variation of this element that contains not a datetime, but an edit counter. The new value of this counter after a PUT would then obviously be based on its previous value and would not be based on anything that the client included in its request. So there are various legitimate ways in which new resource state may derive from previous resource state. However, in no such scenario are the semantics of PUT affected. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
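The edit-counter variation described above can be sketched as a toy resource (hypothetical; not any real Atompub server): the server honours the PUT, but one part of the new state derives from the previous state rather than from the request.

```python
class Resource:
    """Toy resource: part of the new state derives from previous state."""
    def __init__(self):
        self.state = {}
        self.edits = 0  # server-controlled; never taken from the request

    def put(self, representation):
        # The client asserts the new state and the server honours it,
        # except that the edit counter is derived from previous resource
        # state, not from anything the client included in the request.
        self.state = {k: v for k, v in representation.items() if k != "edits"}
        self.edits += 1
        return 200

    def get(self):
        # Representations expose the counter the server maintains.
        return {**self.state, "edits": self.edits}

r = Resource()
r.put({"title": "v1", "edits": 999})  # the client-sent counter is ignored
r.put({"title": "v2"})
```

In no such scenario are the semantics of PUT affected: the client still asserted a full state, and the server still chose how to honour that assertion.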
* Stefan Tilkov <stefan.tilkov@...> [2007-12-24 20:30]: > ... I understand that for the intent to be clear, there'd have > to be a new verb such as PATCH if *any* client should be able > to express the intent of a partial update. Or in absence of PATCH, if you only have the big five, you can tunnel through POST to a patch resource, which can then be advertised adequately in hypermedia. It’s an ugly way of going about this, but at least POST is explicit about having little intrinsic meaning. > And although it would be _OK_ for a server to handle a PUT with > a 'diff' differently, this would have to be part of a > server-specific out-of-band agreement (and a generic client > couldn't be held responsible). > > Or would you say it's _wrong_ for a server to treat a PUT with > a diff format this way? It’s bad. You switch on the media type to tunnel a different verb over PUT. Even worse in this particular case, the verb you’re using should be idempotent but the verb you are tunnelling is not. Tunnelling via POST is less objectionable. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
Roy T. Fielding wrote: > It is wrong. A resource is not just the state -- it is also the > mapping from state to representation(s). For example, you can have > one resource that is my home page and a second resource that is the > text/html version of my home page, and those two resources are > different even if they have the same representation most of the > time and the same state all of the time. A diff is a representation > of how to get from one state to another, not a representation of > either of those states. Therefore, it is always wrong to "patch" > the resource on a PUT of a diff, as opposed to setting the resource > state to the diff. That is why I introduced PATCH in the first > drafts of HTTP/1.1. > I just wish we could have gotten that understanding clear in APP. I'm afraid APP went the other way and allowed some servers to use PUT as PATCH and others to use it as REPLACE, without necessarily telling the client which one they were doing. :-( -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
A. Pagaltzis wrote: > It’s bad. You switch on the media type to tunnel a different verb > over PUT. Even worse in this particular case, the verb you’re > using should be idempotent but the verb you are tunnelling is > not. Tunnelling via POST is less objectionable. The idempotence of PATCH depends on the diff format and algorithm. In the system I'm working with now I am using PUT as a sort of PATCH (though I'm reconsidering that) but idempotence is maintained. -- Elliotte Rusty Harold elharo@... Java I/O 2nd Edition Just Published! http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
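The point that idempotence depends on the diff format can be made concrete with two toy patch operations (hypothetical formats, not any registered media type): a "set" patch is idempotent because repeating it changes nothing, while an "append" patch is not.

```python
def apply_patch(state, patch):
    """Apply a toy patch document to a dict and return the new state."""
    # Copy lists so repeated applications never share mutable state.
    new = {k: (list(v) if isinstance(v, list) else v) for k, v in state.items()}
    for op in patch:
        if op["op"] == "set":        # idempotent: repeating it changes nothing
            new[op["field"]] = op["value"]
        elif op["op"] == "append":   # not idempotent: repeating it grows a list
            new.setdefault(op["field"], []).append(op["value"])
        else:
            raise ValueError("unknown op: %r" % op["op"])
    return new

state = {"title": "draft"}
set_patch = [{"op": "set", "field": "title", "value": "final"}]
append_patch = [{"op": "append", "field": "tags", "value": "news"}]

once = apply_patch(state, set_patch)
twice = apply_patch(once, set_patch)    # same result: safe to retry blindly
a1 = apply_patch(state, append_patch)
a2 = apply_patch(a1, append_patch)      # retry duplicates the tag
```

A diff format built only from "set"-style operations keeps retries safe without conditional headers; one with "append"-style operations does not.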
On 12/25/07, Elliotte Rusty Harold <elharo@...> wrote: > A. Pagaltzis wrote: > > > It's bad. You switch on the media type to tunnel a different verb > > over PUT. Even worse in this particular case, the verb you're > > using should be idempotent but the verb you are tunnelling is > > not. Tunnelling via POST is less objectionable. > > The idempotence of PATCH depends on the diff format and algorithm. ... and any conditional (If-*) headers. Mark. -- Mark Baker. Ottawa, Ontario, CANADA. http://www.markbaker.ca Coactus; Web-inspired integration strategies http://www.coactus.com
* Elliotte Rusty Harold <elharo@...> [2007-12-25 16:35]: > The idempotence of PATCH depends on the diff format and > algorithm. Yes, just like the idempotence of POST depends on the request body format, the processor algorithm, and various If-*/POE/etc request headers… … but in the end it’s still POST. Let’s say PUT’s idempotency is more idempotent than PATCH’s. :-) Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
> I'm not sure I buy that principle myself. I often find URL construction > to be quite useful. However if an app violates the principle in the way > you suggest here by building URLs from cookies, the application is not > RESTful, for better or worse. And how bad is it, in REST terms, to rely on URL construction in the client library if it's a library only for that specific REST service? Keep in mind that I am the developer of the service and the client library and it's not gonna be public ever. I find myself cheating by embedding some algorithm to construct the URI in the client library instead of relying on the server entirely, but your post and Roy Fielding's reassure me a bit. -- Lawrence, stacktrace.it - oluyede.org - neropercaso.it "It is difficult to get a man to understand something when his salary depends on not understanding it" - Upton Sinclair
Elliotte Rusty Harold wrote: > I just wish we could have gotten that understanding clear in APP. I'm > afraid APP went the other way and allowed some servers to use PUT as > PATCH and others to use it as REPLACE, without necessarily telling the > client whihc one they were doing. :-( Disagreed. APP is silent on that matter (which is good), so what counts is the definition of PUT (RFC2616). As a matter of fact, this very discussion (over on the APP mailing list) was one of the reasons why James Snell started to work on the orphaned PATCH Internet Draft. BR, Julian
Hi all: Sorry I've been away from the list for a while; life had been calling and I had been ignoring her for too long. :-) Anyway, has any kind of de facto standard emerged for documenting a (set of) RESTful services? It's become clear I really need to document what I'm building if for no other reason than to be able to keep it all straight in my own head. So in other words, is there a generally accepted format for describing resources, methods, headers, and content-types they accept on requests; representations, headers, and status codes they respond with; and workflow interactions they perform when orchestrated? Thanks in advance. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org
Hi all:
I am working on a client project and I'd like to get some input on a RESTful
design for one aspect. The technical upshot is I am creating a RESTful layer
on a subset of functionality for the tables in the Drupal CMS v5.x using
MySQL and PHP 4.4.7. I'll describe the situation in abstract but also in
specifics since the Drupal schema design is publicly known.
At an abstract level I have a table with a list of articles, a second table
with a list of predefined tags, a third table with a list of predefined tag
categories, and a fourth table that joins tags to articles. Each tag is
assigned to a tag category, and each article can have one or more tags where
tags can be applied to any number of articles.
I plan to present the user with a list of HTML <input type="checkbox">
elements, let them select the appropriate tags, and then use AJAX to update
the server via a RESTful web service as the default Drupal UI for managing
terms on nodes is at best archaic and abysmally inefficient.
On the Apache/PHP/MySQL end there will be a sparse matrix, i.e. if an
article has two tags (e.g. 'news' and 'human-interest') there will only be
two records in the "article-tags" table even though it is possible to have
tens of applicable tags for an article type. The web service will likely
need to INSERT any newly assigned tags and DELETE any formerly assigned
tags.
I can envision either issuing the INSERTs and DELETEs individually as
the user selects and deselects tags, or it could be done on a "batch submit"
basis. I think I'm leaning toward the latter to be more consistent with the
way the web normally works, i.e. "Make all changes on a form and then click
submit to save or just abandon to not save" as my users are not techies. On
the other hand, I'd like to understand the best practice for resource
interaction for both approaches.
Here is what I've come up with along with my justification; I'd appreciate
any critique of my approach along with any specific rationale. BTW, I plan
to use simple 'application/x-www-form-urlencoded' for POST & PUT requests
and JSON for GET and possibly POST and PUT responses.
To assign a tag to an article I could PUT to the following resource since I
know the tag-id in advance, and I assume the following should return a '200
OK':
PUT /articles/{article-id}/tags/{tag-id}
Alternately I could POST to the same URL and get back a '201 Created':
POST /articles/{article-id}/tags/{tag-id}
Which is preferred or is there some better alternate, and why?
To unassign a tag from an article I think I would obviously DELETE the same
resource and expect back a 200 OK if it worked, right?
DELETE /articles/{article-id}/tags/{tag-id}
On the other hand it seems a bit harder to pin down an obvious approach to
support the "batch submit" user interaction. I assume this is the
appropriate resource URL however I'm really not sure whether to PUT or POST
to it?
/articles/{article-id}/tags
Also, when I PUT or POST should I submit:
1.) Just the tags the user wants assigned after they press submit?
2.) A list of changed tags, i.e. which tags to assign and which tags to
unassign?
3.) All potential tags with each one designated as "assigned" or
"not-assigned"?
4.) Support all three and have a mode value specifying which mode to use?
5.) Or some other approach?
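For option #1 above (PUT the full set of assigned tags), the server-side INSERT/DELETE reconciliation reduces to a set difference; a minimal sketch with made-up tag ids (the function name is hypothetical, not part of Drupal):

```python
def diff_tags(current, submitted):
    """Given the tag ids currently stored for an article and the full
    set the user submitted, compute the rows to INSERT and DELETE."""
    current, submitted = set(current), set(submitted)
    to_insert = submitted - current   # newly assigned tags
    to_delete = current - submitted   # formerly assigned tags
    return to_insert, to_delete

# Article currently tagged 215 ('news') and 216 ('human-interest');
# the user unchecks 216 and checks 301.
to_insert, to_delete = diff_tags([215, 216], [215, 301])
```

One nice property of option #1 is that resubmitting the same form is harmless: diffing a tag set against itself yields nothing to insert or delete, so the PUT stays idempotent.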
BTW, I'm expecting that the attributes I'll use for the <input>
element's "id" will be of the format "article_{article-id}_tag_{tag-id}"
where an example might look like: "article_1152_tag_215".
So what do you think? How do you think it should best be done? Thanks
in advance for helping me think through my first real-world RESTful web api.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
P.S. For those familiar with Drupal, or for those interested in looking
into Drupal my 'articles' are stored in MySQL using the 'node' table with an
integer primary key of 'nid' and a 'type' field with a value of 'article';
the tag categories are stored in the 'vocabulary' table with an integer
primary key of 'vid' and a 'name' field containing the vocabulary's
human-readable description; the tags themselves are stored in the
'term_data' table with an integer primary key of 'tid', an integer foreign
key to the 'vocabulary' table called 'vid', and a 'name' field for the
term's human-readable description of the term; and the terms (tags) are
related to the nodes (articles) via the 'term_node' table using the integer
foreign keys 'nid' and 'tid'.
MikeS:
Seems like this is all about how to "tag" one or more articles using a
RESTful pattern. I would consider defining a TagArticle resource that
would look something like this:
<article id="{article-id}">
<tag id="{tag1-id}" />
<tag id="{tag2-id}" />
...
</article>
This would then be used to inform the server of the tags that apply to
a single article. You can GET, PUT, and DELETE this resource as you
wish. You can use a Microformat approach (see the rel-tag spec) or
even Atom as the mime-type for the document.
To keep things simple, I'd always treat PUT as a complete replacement.
As has been discussed here, you might consider PATCH instead of PUT if
you really want to do partial updates.
MikeA
On Dec 27, 2007 9:47 PM, Mike Schinkel <mikeschinkel@...> wrote:
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
mike amundsen wrote:
> Seems like this is all about how to "tag" one or more
> articles using a RESTful pattern.
Yup, I think so.
> I would consider defining a
> TagArticle resource
Would "ArticleTag" not be appropriate, or was your choice just arbitrary?
> that would look something like this:
>
> <article id="{article-id}">
> <tag id="{tag1-id}" />
> <tag id="{tag2-id" />
> ...
> </article>
>
> This would then be used to inform the server of the tags that
> apply to a single article. you can GET, PUT, and DELETE this
> resource as you wish. You can use a Microformat approach (see
> the rel-tag spec) or even Atom as the mime-type for the document.
Hmm. I'd *really* like to stay away from the complexity of having to pack
into XML or HTML and parse on both ends. Was your use of XML for an
example just habit, or are you advocating it over the request/response forms
of "application/x-www-form-urlencoded"/"JSON" that I discussed?
> To keep things simple, I'd always treat PUT as a complete replacement.
> > ... you might consider PATCH instead of PUT if you really want to do
> partial updates.
So I would PUT my option #1 ("Just the tags the user wants assigned after
they press submit"), and I might also PATCH #2 ("A list of changed tags")
and POST #3 ("All potential tags") if I really need those interactions?
> As has been discussed here, you might consider PATCH instead
I wasn't aware of those discussions, but I will check the archives.
Thanks!
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
Mike:
I think we're on the same path here. My follow-ups are:
> Would "ArticleTag" not be appropriate, or was your choice just arbitrary?
My choice was totally arbitrary.
> ... Was your use of XML for an
> example just habit, or are you advocating it over the request/response forms
> of "application/x-www-form-urlencoded"/"JSON" that I discussed?
My habit (sorry). JSON makes good sense if you plan on only doing
this via Ajax calls. FWIW, I would point out that JSON is really the
*representation* of your ArticleTag resource. form-encoding is really
another representation of the same resource. XML, Atom, etc are just
other representations. You might keep that in mind just in case -
somewhere down the road - you want to _represent_ your article tags
differently.
> So I would PUT my option #1 ("Just the tags the user wants assigned after
> they press submit"), and I might also PATCH #2 ("A list of changed tags")
> and POST #3 ("All potential tags") if I really need those interactions?
Again, for me, PUT makes the most sense. I am not up-to-speed on
PATCH, check out the specs and the archives for more
(http://www3.tools.ietf.org/html/draft-dusseault-http-patch-10).
Finally, I think of POST as a 'factory' pattern. I use POST when I do
not have a "document name" and am expecting the server to supply one.
I use PUT when I want to allow/require the *client* to select a
document name:
PUT /tags/my-tags (creates a resource named "my-tags" at the /tags/ location)
POST /tags/ (creates a resource with a name created by the server,
e.g. /tags/aXcd3)
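The PUT-vs-POST naming distinction above can be sketched as a toy in-memory store (hypothetical names; the server-generated ids here are made up):

```python
import itertools

class TagStore:
    """PUT: the client picks the name (idempotent create-or-replace).
       POST: the server picks the name (a fresh resource per request)."""
    def __init__(self):
        self.resources = {}
        self._ids = itertools.count(1)

    def put(self, name, body):
        created = name not in self.resources
        self.resources[name] = body      # repeating the PUT changes nothing more
        return (201 if created else 200), f"/tags/{name}"

    def post(self, body):
        name = f"srv{next(self._ids)}"   # server-assigned name, like /tags/aXcd3
        self.resources[name] = body
        return 201, f"/tags/{name}"

store = TagStore()
status1, loc1 = store.put("my-tags", {"tags": [215]})
status2, loc2 = store.put("my-tags", {"tags": [215]})  # retry hits the same URI
status3, loc3 = store.post({"tags": [301]})            # a fresh URI each time
```

This is why a retried PUT is safe while a retried POST may create duplicates: the client-chosen name pins the resource down, the server-chosen name does not.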
MikeA
On Dec 27, 2007 11:02 PM, Mike Schinkel <mikeschinkel@...> wrote:
--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act. " (George Orwell)
I quite agree. I use XML/XSD for much of my server-side validation. I even have routines that convert non-XSD-friendly mime-types into XML just so I can leverage XSD more easily. Not always efficient, but it often helps me limit my scripting/coding. MikeA On Dec 28, 2007 12:43 AM, Berend de Boer <berend@...> wrote: > >>>>> "mike" == mike amundsen <mamund@...> writes: > > >> ... Was your use of XML for an example just habit, or are you > >> advocating it over the request/response forms of > >> "application/x-www-form-urlencoded"/"JSON" that I discussed? > mike> My habit (sorry). JSON makes good sense if you plan on only > mike> doing this via Ajax calls. > > But still, XML allows you to actually validate that you got something decent > and not close-looking garbage. > > And XML makes it much easier to have your REST services operate with the > rest of the world as JSON parsers are not ubiquitous. > > -- > Cheers, > > Berend de Boer > -- mca "In a time of universal deceit, telling the truth becomes a revolutionary act. " (George Orwell)
Hi all,
We've just released our first 1.1 milestone. It's a perfect opportunity
to send us feed-back on what seems to be missing or should be improved
in Restlet. The final 1.1 version is due in early Q2 2008.
Here is a summary of the main changes:
- Resource refactoring into a lower-level class (Handler) and
more clearly defined higher-level methods.
- Many improvements to facilitate the usage with Spring
- Added Component's internal router to modularize large applications
- Added RIAP scheme for optimized internal dispatching
- Added built-in HTTP client and server connectors to NRE (BIO)
- Added experimental Grizzly HTTP server (full NIO)
- Added experimental WADL extension to configure components
- Representations can now be exposed via BIO Reader/Writer
- Added a new JAXB extension for easy XML to POJO mappings
Changes log:
http://www.restlet.org/documentation/1.1/changes
Download links:
http://www.restlet.org/downloads/1.1/restlet-1.1-M1.zip
http://www.restlet.org/downloads/1.1/restlet-1.1-M1.exe
Best regards,
--
Jerome Louvel
http://www.noelios.com
Hi,
I'm writing a RESTful application that provides versioning of
(persisted) entities. I'm using the version identifier as the ETag for
the related resources, so far so good. As the clients update the
entities, they send a PUT request with an If-Match header with their
last version identifier so we can avoid some concurrency problems with
optimistic locking. To create the versioning identifier I'm digesting
(SHA-256) the resource state: I'm exchanging a very small chance of
collision against the chance to detect previously sent updates. When a
client sends a PUT request I have the If-Match ETag and I compute the
new ETag automatically. Now suppose this sequence of events:
1. Client A sends PUT ... (If-Match: "XXX") ... FOO ...
2. Server processes the request and sends the new ETag: YYY
3. Client B sends PUT ... (If-Match: "YYY") ... BAR
4. Server processes the request and sends the new ETag: ZZZ
5. Client A retries the first PUT (network problems, double submit, it
doesn't matter) sends ETag: YYY
My application can compute the new version identifier for FOO and
it will always be YYY, so doing a quick check it finds this version
already persisted and it can safely assume that it's a duplicate
submission. My question is: can it return a 200 to this duplicate
request, or must it return a 412, as per section 14.24? AFAICT
returning 200 should be OK, but I don't want to take chances of going
against the Internet ;) or messing with some subtle implicit rule
about caching (e.g. some intermediate cache between both client A and
client B overwrites B's update and becomes inconsistent).
Best regards,
Daniel Yokomizo.
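As a minimal sketch of the scheme Daniel describes (all names here are hypothetical, not his actual code): the ETag is a SHA-256 digest of the representation, so a retried PUT can be recognized because its body hashes to an already-persisted version. The lenient 200-on-duplicate behavior he asks about is shown; a strict reading of RFC 2616 section 14.24 would return 412 there instead.

```python
import hashlib

def make_etag(body: bytes) -> str:
    # Content-addressed version identifier, as in Daniel's scheme.
    return hashlib.sha256(body).hexdigest()

class VersionedResource:
    def __init__(self, initial: bytes):
        self.versions = {}                 # etag -> body, every persisted version
        self.current = make_etag(initial)
        self.versions[self.current] = initial

    def put(self, body: bytes, if_match: str):
        """Handle a conditional PUT; returns (status, etag)."""
        new_etag = make_etag(body)
        if if_match == self.current:
            # Normal optimistic-locking update.
            self.versions[new_etag] = body
            self.current = new_etag
            return 200, new_etag
        if new_etag in self.versions:
            # A retry of an already-applied PUT: this exact state was
            # persisted earlier. Returning 200 is the lenient behavior
            # Daniel asks about; RFC 2616 14.24 strictly says 412.
            return 200, new_etag
        return 412, self.current           # genuinely stale precondition
```

Running the thread's A/B scenario through this: A's retry with the stale ETag hits the duplicate branch, while a PUT of genuinely new content with a stale ETag gets 412.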
Daniel Yokomizo wrote:
> Hi,
>
> I'm writing a RESTful application that provides versioning of
> (persisted) entities. I'm using the version identifier as the ETag for
> the related resources, so far so good. As the clients update the
> entities, they send a PUT request with an If-Match header with their
> last version identifier so we can avoid some concurrency problems with
> optimistic locking. To create the versioning identifier I'm digesting
> (SHA-256) the resource state: I'm exchanging a very small chance of
> collision against the chance to detect previously sent updates. When a
> client sends a PUT request I have the If-Match ETag and I compute the
> new ETag automatically. Now suppose this sequence of events:
>
> 1. Client A sends PUT ... (If-Match: "XXX") ... FOO ...
> 2. Server processes the request and sends the new ETag: YYY
> 3. Client B sends PUT ... (If-Match: "YYY") ... BAR
> 4. Server processes the request and sends the new ETag: ZZZ
> 5. Client A retries the first PUT (network problems, double submit, it
> doesn't matter) sends ETag: YYY
>
> My application can compute the new version identifier for FOO and
> it will always be YYY, so doing a quick check it finds this version
> already persisted and it can safely assume that it's a duplicate
> submission. My question is: can it return a 200 to this duplicate
> request, or must it return a 412, as per section 14.24? AFAICT
> returning 200 should be OK, but I don't want to take chances of going
> against the Internet ;) or messing with some subtle implicit rule
> about caching (e.g. some intermediate cache between both client A and
> client B overwrites B's update and becomes inconsistent).

Could you elaborate why you think not sending a 412 would be ok? It
seems 14.24 is clear enough:

"If none of the entity tags match, or if "*" is given and no current
entity exists, the server MUST NOT perform the requested method, and
MUST return a 412 (Precondition Failed) response. This behavior is most
useful when the client wants to prevent an updating method, such as PUT,
from modifying a resource that has changed since the client last
retrieved it." --
<http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.14.24.p.5>

So yes, PUT isn't idempotent when used as a conditional request.

BR, Julian
On Dec 28, 2007 1:40 PM, Julian Reschke <julian.reschke@...> wrote:
> Daniel Yokomizo wrote:
> > Hi,
> >
> > I'm writing a RESTful application that provides versioning of
> > (persisted) entities. I'm using the version identifier as the ETag
> > for the related resources, so far so good. As the clients update the
> > entities, they send a PUT request with an If-Match header with their
> > last version identifier so we can avoid some concurrency problems
> > with optimistic locking. To create the versioning identifier I'm
> > digesting (SHA-256) the resource state: I'm exchanging a very small
> > chance of collision against the chance to detect previously sent
> > updates. When a client sends a PUT request I have the If-Match ETag
> > and I compute the new ETag automatically. Now suppose this sequence
> > of events:
> >
> > 1. Client A sends PUT ... (If-Match: "XXX") ... FOO ...
> > 2. Server processes the request and sends the new ETag: YYY
> > 3. Client B sends PUT ... (If-Match: "YYY") ... BAR
> > 4. Server processes the request and sends the new ETag: ZZZ
> > 5. Client A retries the first PUT (network problems, double submit,
> > it doesn't matter) sends ETag: YYY
> >
> > My application can compute the new version identifier for FOO and
> > it will always be YYY, so doing a quick check it finds this version
> > already persisted and it can safely assume that it's a duplicate
> > submission. My question is: can it return a 200 to this duplicate
> > request, or must it return a 412, as per section 14.24? AFAICT
> > returning 200 should be OK, but I don't want to take chances of
> > going against the Internet ;) or messing with some subtle implicit
> > rule about caching (e.g. some intermediate cache between both
> > client A and client B overwrites B's update and becomes
> > inconsistent).
>
> Could you elaborate why you think not sending a 412 would be ok? It
> seems 14.24 is clear enough:
>
> "If none of the entity tags match, or if "*" is given and no current
> entity exists, the server MUST NOT perform the requested method, and
> MUST return a 412 (Precondition Failed) response. This behavior is
> most useful when the client wants to prevent an updating method, such
> as PUT, from modifying a resource that has changed since the client
> last retrieved it." --
> <http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.14.24.p.5>

I wasn't entirely clear. Arguably all the versions of the entity are
current, as the user may get older versions and at a future date branch
on old versions. Instead of creating a protocol to handle versioning
above HTTP I thought using the existing versioning support would be
better.

> So yes, PUT isn't idempotent when used as a conditional request.

Where is this specified? Or is it a known convention?

In this situation the server knows that this is a repeated submission
and it's semantically safe to ignore it. I could just drop If-Match in
this case and make the client send the version identifier in the
request body, but I'm not sure the problem would be much different. If
I ignore If-Match issues the same situation can happen and the server
can safely ignore the duplicate PUT, but I would still ask the same
question.

> BR, Julian

Best regards,
Daniel Yokomizo.
* Julian Reschke <julian.reschke@...> [2007-12-28 14:45]:
> So yes, PUT isn't idempotent when used as a conditional request.

… eh?

Idempotency is defined in terms of side effects, and a repeatedly
sent conditional PUT certainly *does not* have cumulative side
effects.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Side effects aside, the original question is one I'd like to see
answered as well. If the server returns a 4xx response, I would expect
the client to fetch the most recent ETag and try again, which is
unnecessary overhead. The question is: if nobody can make a definitive
observation that the client's precondition failed, should the server
still report it?

- Steve

--------------
Steve G. Bjorg
http://wiki.mindtouch.com
http://wiki.opengarden.org

On Dec 28, 2007, at 7:36 AM, Aristotle Pagaltzis wrote:
> * Julian Reschke <julian.reschke@...> [2007-12-28 14:45]:
> > So yes, PUT isn't idempotent when used as a conditional request.
>
> … eh?
>
> Idempotency is defined in terms of side effects, and a repeatedly
> sent conditional PUT certainly *does not* have cumulative side
> effects.
>
> Regards,
> --
> Aristotle Pagaltzis // <http://plasmasturm.org/>
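The retry overhead Steve describes can be sketched as a client-side loop (all helper names and signatures here are hypothetical, not any particular HTTP library): on 412 the client must re-GET to learn the current ETag, reapply its change to the fresh state, and try the PUT again.

```python
def update_with_retry(get, put, transform, max_tries=5):
    """Optimistic-concurrency update loop.

    get() -> (etag, state); put(state, etag) -> (status, etag);
    transform(state) -> new state. Retries on 412 Precondition Failed.
    """
    for _ in range(max_tries):
        etag, state = get()                       # learn current ETag/state
        status, new_etag = put(transform(state), etag)
        if status != 412:                         # precondition held
            return status, new_etag
        # 412: someone else won the race; loop re-fetches and retries.
    raise RuntimeError("too much contention, giving up")
```

Each 412 costs an extra GET round trip, which is the "unnecessary overhead" being questioned when the server could have detected a harmless duplicate.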
Daniel Yokomizo wrote:
> I wasn't entirely clear. Arguably all the versions of the entity are
> current, as the user may get older versions and at a future date
> branch on old versions. Instead of creating a protocol to handle

That doesn't make them "current". They may be current for other URLs.
Or do you want to serve those older versions from the same URL? In that
case it would be correct, and it would be a case of content negotiation
(including a Vary header etc.).

> versioning above HTTP I thought using the existing versioning support
> would be better.

There already is a protocol for versioning over HTTP (RFC 3253), so you
don't need to invent anything new.

> > So yes, PUT isn't idempotent when used as a conditional request.
>
> Where is this specified? Or is it a known convention?
>
> In this situation the server knows that this is a repeated submission
> and it's semantically safe to ignore it. I could just drop If-Match in

I don't see why it would be safe to ignore it. If a client puts FOO,
then BAR, then FOO, ignoring the last one will not result in the state
expected.

> this case and make the client send the version identifier in the
> request body, but I'm not sure the problem would be much different. If
> I ignore If-Match issues the same situation can happen and the server
> can safely ignore the duplicate PUT, but I would still ask the same
> question.

It seems that the main issue here is ignoring the PUT just because the
content was sent before. How can you assume this is ok -- maybe the
client intended to overwrite the second version with the first one?

BR, Julian
Aristotle Pagaltzis wrote:
> * Julian Reschke <julian.reschke@...> [2007-12-28 14:45]:
> > So yes, PUT isn't idempotent when used as a conditional request.
>
> … eh?
>
> Idempotency is defined in terms of side effects, and a repeatedly
> sent conditional PUT certainly *does not* have cumulative side
> effects.

Yes, indeed (when it has the *same* conditional headers, but of course
that's the only definition that makes sense here).

BR, Julian
* Daniel Yokomizo <daniel.yokomizo@...> [2007-12-28 14:30]:
> 1. Client A sends PUT ... (If-Match: "XXX") ... FOO ...
> 2. Server processes the request and sends the new ETag: YYY
> 3. Client B sends PUT ... (If-Match: "YYY") ... BAR
> 4. Server processes the request and sends the new ETag: ZZZ
> 5. Client A retries the first PUT (network problems, double submit, it
> doesn't matter) sends ETag: YYY

I don't understand this sequence. You say the client sends ETag
XXX in step #1, but when it retries the request in step #5, it
sends ETag YYY. Did you make a typo somewhere or did you omit
things from your timeline?

> My application can compute the new version identifier for FOO and
> it will always be YYY, so doing a quick check it finds this version
> already persisted and it can safely assume that it's a duplicate
> submission. My question is: can it return a 200 to this duplicate
> request, or must it return a 412, as per section 14.24? AFAICT
> returning 200 should be OK, but I don't want to take chances of going
> against the Internet ;) or messing with some subtle implicit rule
> about caching (e.g. some intermediate cache between both client A and
> client B overwrites B's update and becomes inconsistent).

A resource cannot have multiple equally valid ETags at the same
time if intermediaries are supposed to understand its current
state. It can have several variants, each of which may have their
own ETag, but then you need to include a Vary response header to
let intermediaries know what to look at in client requests in
order to find out which variant is being requested.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Hi Hari,

Concerning the Restlet framework (Java), which is covered in chapter 12
of the book, you can find further documentation on our web site, like
our tutorial:
http://www.restlet.org/documentation/1.0/tutorial

You can also seek help in our discussion list:
http://www.restlet.org/community/lists

Best regards,
Jerome
http://www.restlet.org

Hari Dhanakoti wrote:
> Hi all,
>
> I am a newbie in RESTful web services and I am looking for resources
> for reading. I have read some of O'Reilly's publications but I could
> hardly follow certain steps. I am looking for RESTful web services in
> Java.
>
> So I hope you people can suggest some books or articles related to
> RESTful web services.
>
> Thanks in advance
>
> --
> Regards
>
> Hari. D
On Dec 28, 2007 4:08 PM, Aristotle Pagaltzis <pagaltzis@...> wrote:
> * Daniel Yokomizo <daniel.yokomizo@...> [2007-12-28 14:30]:
> > 1. Client A sends PUT ... (If-Match: "XXX") ... FOO ...
> > 2. Server processes the request and sends the new ETag: YYY
> > 3. Client B sends PUT ... (If-Match: "YYY") ... BAR
> > 4. Server processes the request and sends the new ETag: ZZZ
> > 5. Client A retries the first PUT (network problems, double submit,
> > it doesn't matter) sends ETag: YYY
>
> I don't understand this sequence. You say the client sends ETag
> XXX in step #1, but when it retries the request in step #5, it
> sends ETag YYY. Did you make a typo somewhere or did you omit
> things from your timeline?

Oops, it's a typo indeed. It should be ETag XXX. I hope this clears up
any remaining confusion. It's supposed to be a simple retry of the
first PUT (e.g. client A never saw the response and wants to ensure
everything is fine). In the meantime client B saw A's changes (because
he has the unforgeable YYY ETag) and submits its own changes.

> > My application can compute the new version identifier for FOO and
> > it will always be YYY, so doing a quick check it finds this version
> > already persisted and it can safely assume that it's a duplicate
> > submission. My question is: can it return a 200 to this duplicate
> > request, or must it return a 412, as per section 14.24? AFAICT
> > returning 200 should be OK, but I don't want to take chances of
> > going against the Internet ;) or messing with some subtle implicit
> > rule about caching (e.g. some intermediate cache between both
> > client A and client B overwrites B's update and becomes
> > inconsistent).
>
> A resource cannot have multiple equally valid ETags at the same
> time if intermediaries are supposed to understand its current
> state. It can have several variants, each of which may have their
> own ETag, but then you need to include a Vary response header to
> let intermediaries know what to look at in client requests in
> order to find out which variant is being requested.

Yes, I'm assuming Vary for the current URI and no Vary for the
specific revisions. The URLs are:

/<resources>/<id> (with Vary)
/<resources>/<id>/<version> (without Vary)
/<resources>/<id>/current (without Vary and using appropriate Cache
directives to avoid caching)

PUT is allowed only on the main URL (it could be allowed on current
too, but it isn't) because we can't PUT to a specific version: it's
supposed to be immutable.

> Regards,
> --
> Aristotle Pagaltzis // <http://plasmasturm.org/>

Best regards,
Daniel Yokomizo.
Daniel Yokomizo wrote:
> Yes, I'm assuming Vary for the current URI and no Vary for the
> specific revisions. The URLs are:
>
> /<resources>/<id> (with Vary)
> /<resources>/<id>/<version> (without Vary)
> /<resources>/<id>/current (without Vary and using appropriate Cache
> directives to avoid caching)
>
> PUT is allowed only on the main URL (it could be allowed on current
> too, but it isn't) because we can't PUT to a specific version: it's
> supposed to be immutable.

So, what do you return in the Vary header?

Confused, Julian
On Dec 28, 2007, at 8:37 AM, Daniel Yokomizo wrote:
> On Dec 28, 2007 4:08 PM, Aristotle Pagaltzis <pagaltzis@...> wrote:
> > * Daniel Yokomizo <daniel.yokomizo@...> [2007-12-28 14:30]:
> > > 1. Client A sends PUT ... (If-Match: "XXX") ... FOO ...
> > > 2. Server processes the request and sends the new ETag: YYY
> > > 3. Client B sends PUT ... (If-Match: "YYY") ... BAR
> > > 4. Server processes the request and sends the new ETag: ZZZ
> > > 5. Client A retries the first PUT (network problems, double
> > > submit, it doesn't matter) sends ETag: YYY
> >
> > I don't understand this sequence. You say the client sends ETag
> > XXX in step #1, but when it retries the request in step #5, it
> > sends ETag YYY. Did you make a typo somewhere or did you omit
> > things from your timeline?
>
> Oops, it's a typo indeed. It should be ETag XXX. I hope this clears
> up any remaining confusion. It's supposed to be a simple retry of the
> first PUT (e.g. client A never saw the response and wants to ensure
> everything is fine). In the meantime client B saw A's changes
> (because he has the unforgeable YYY ETag) and submits its own
> changes.

In that case, client A definitely wants to receive the 412, since
otherwise it will never know about client B's changes.

A more interesting question is whether the spec over-constrains the
case where a PUT is successful but tried again. In other words, should
the server be allowed to accept the PUT if the ETag differs but the
current state matches what is being PUT? Subversion handles such cases
nicely because it is common for two developers to patch the same bugs.
I think the "MUST respond with 412" is yet another case of a bogus
requirement being added in 2616. Note: this is an HTTP spec issue, not
a REST issue.

....Roy
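Roy's suggested relaxation (accepting a stale-ETag PUT whose body is identical to the current state, since it changes nothing) could be sketched like this; all names are hypothetical, and note the same-body branch is what RFC 2616 section 14.24 as written forbids:

```python
import hashlib

def handle_put(current_body: bytes, current_etag: str,
               new_body: bytes, if_match: str):
    """Returns (status, body, etag) for a conditional PUT."""
    if if_match == current_etag:
        # Precondition holds: normal conditional update.
        new_etag = hashlib.sha256(new_body).hexdigest()
        return 200, new_body, new_etag
    if new_body == current_body:
        # Stale ETag but byte-identical state: a no-op, so the
        # relaxation would answer success instead of the mandated 412.
        return 200, current_body, current_etag
    return 412, current_body, current_etag    # a real conflict
```

This mirrors the Subversion analogy: a "commit" that reproduces the repository's current state can be reported as already done rather than as a conflict.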
mike amundsen wrote:
> I think we're on the same path here. My follow-ups are:
> > Would "ArticleTag" not be appropriate, or was your choice
> just arbitrary?
> My choice was totally arbitrary.
>
> > So I would PUT my option #1 ("Just the tags the user wants assigned
> > after they press submit"), and I might also PATCH #2 ("A list of
> > changed tags") and POST #3 ("All potential tags") if I
> really need those interactions?
>
> Again, for me, PUT makes the most sense. I am not up-to-speed
> on PATCH, check out the specs and the archives for more
> (http://www3.tools.ietf.org/html/draft-dusseault-http-patch-10).
>
> Finally, I think of POST as a 'factory' pattern. I use POST
> when I do not have a "document name" and am expecting the
> server to supply one.
> I use PUT when I want to allow/require the *client* to select
> a document name:
>
> PUT /tags/my-tags (creates a resource named "my-tags" at the
> /tags/ location)
> POST /tags/ (creates a resource with a name
> created by the server, i.e. /tags/aXcd3)
Thanks! That was all very helpful.
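Mike's PUT-versus-POST distinction above can be sketched as a toy in-memory store (the class, naming scheme, and return shape are all hypothetical): PUT writes to a client-chosen name, POST asks the server to mint a name and report it back, as it would via a Location header.

```python
import itertools

class TagStore:
    def __init__(self):
        self.docs = {}
        self._ids = itertools.count(1)

    def put(self, name, body):
        # Client picks the document name; 201 on create, 200 on replace.
        created = name not in self.docs
        self.docs[name] = body
        return (201 if created else 200), f"/tags/{name}"

    def post(self, body):
        # Server acts as a factory and assigns the name itself,
        # reporting the new URL (a la the Location header).
        name = f"t{next(self._ids)}"
        self.docs[name] = body
        return 201, f"/tags/{name}"
```

This is also why PUT is idempotent (repeating it replaces the same document) while POST is not (each repeat mints a fresh name).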
> > ... Was your use of XML for an
> > example just habit, or are you advocating it over the
> request/response
> > forms of "application/x-www-form-urlencoded"/"JSON" that I
> discussed?
>
> My habit (sorry). JSON makes good sense if you plan on only
> doing this via Ajax calls. FWIW, I would point out that JSON
> is really the
> *representation* of your ArticleTag resource. form-encoding
> is really another representation of the same resource. XML,
> Atom, etc are just other representations. You might keep that
> in mind just in case - somewhere down the road - you want to
> _represent_ your article tags differently.
Yeah, given that part of the project could get refactored out into a
generic REST library for PHP, I have in mind to offer the requestor the
ability to request different representation types when needed, while
providing JSON as the default. Of course, since my project's only need
for this is AJAX, I don't really have a need for those other
representation types at the moment.
--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
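The point upthread that JSON, form-encoding, and XML are all just representations of the same resource can be sketched as a tiny serializer switch keyed off the Accept header (the resource shape and function are hypothetical, with JSON as the default, as Mike describes):

```python
import json
from urllib.parse import urlencode

def represent(tags, accept="application/json"):
    """Serialize a list of article tags per the requested media type."""
    if "xml" in accept:
        items = "".join(f"<tag>{t}</tag>" for t in tags)
        return "application/xml", f"<tags>{items}</tags>"
    if "x-www-form-urlencoded" in accept:
        # Repeated "tag" keys, as a form submission would send them.
        return "application/x-www-form-urlencoded", urlencode([("tag", t) for t in tags])
    return "application/json", json.dumps(tags)   # the default
```

Adding Atom or plain text later is just another branch; the resource and its URL stay the same.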
Berend de Boer wrote:
> But still, XML allows you to actually validate you got

Thanks for the comment. I would like to make a counter-point though.

While I was a proponent of XML in the early days, I've since come to
see it as more of a harm than good in many cases, especially when
namespaces and validation are incorporated. I think that's one reason
TimBL ended up reversing direction on HTML [1]. Its biggest benefit is
that it has been widely deployed, but its complexity (namespaces,
schemas, XSLT, etc.) has made for spotty tool support and a lack of
universally practiced approaches.

Considered from the perspective of the Robustness Principle, a.k.a.
Postel's Law [2], use of (especially validated) XML is less than ideal
because it puts a burden of conservatism on others. If I have a web
service that requires the client to validate, then after they've done
that and deployed the code I can't change the representation that the
service serves, even if the change would not otherwise affect them.
That to me is anathema.

Using XML only seems proper to me when you control development of both
the client and the server, when any deviance from prior behavior should
trigger an error, when every error should be reported immediately to
the authority overseeing the system, and when the operation of the
system should be halted until the deviation is discovered and
corrected. Okay, maybe I'm being a bit extreme in my ideal criteria for
XML, but I was trying to make a point, and the point is that XML seems
(to me) to be best used for internal "enterprisy" applications, and
even then I question its real benefits.

> something decent and not close looking garbage.

Personally I find the verbosity and all the extraneous syntax of XML to
be harder to read than JSON, not easier.

> And XML makes it much easier to have your REST services

How so? It seems easier to me to process JSON vs. XML, especially in a
browser and in PHP v4.

> operate with the rest of the world as JSON parsers are not ubiquitous.

But writing a JSON parser is trivial, is it not? Far easier than
writing an XML parser for anything other than the most trivial of XML,
right?

mike amundsen wrote:
> I use XML/XSD for much of my server-side validation. I even
> have routines that convert non-XSD-friendly mime-types into
> XML just so I can leverage XSD more easily.

Doesn't that make you more conservative in what you accept rather than
more liberal?

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org

P.S. BTW, I'm arguing for JSON more as an agnostic, just as a religious
agnostic would argue for a God with an atheist. But I am arguing
against XML as a religious atheist would argue against a God with a
theist.

[1] http://dig.csail.mit.edu/breadcrumbs/node/166
[2] http://en.wikipedia.org/wiki/Robustness_Principle
On Dec 28, 2007, at 10:59 PM, Mike Schinkel wrote:
> Its biggest benefit is that it has been widely deployed but its
> complexity (namespaces, schemas, XSLT, etc.) has made for spotty
> tool support and lack of universally practiced approaches.

XML has "spotty tool support"? Are there platforms that don't support
XML? I'm all for reasonable alternatives to XML where they make sense,
but the single most significant benefit of XML seems to be the
excellent support in terms of tools and libraries.

Stefan
--
Stefan Tilkov, http://www.innoq.com/blog/st/
* Mike Schinkel <mikeschinkel@...> [2007-12-28 23:00]:
> I think that's one reason TimBL ended up reversing direction on
> HTML <http://dig.csail.mit.edu/breadcrumbs/node/166>

http://plasmasturm.org/log/447/

> Its biggest benefit is that it has been widely deployed but
> its complexity (namespaces, schemas, XSLT, etc.)

All of these are opt-in. Although I think well-practiced namespaces
(mustIgnore et al.) are a huge win. Actually XSLT is not even opt-in,
it's completely orthogonal (well, unless you serve transformations to
clients as a sort of code-on-demand approach, I suppose -- a rather
hypothetical scenario).

> has made for spotty tool support and lack of universally
> practiced approaches.

I'm with Stefan here -- I seriously fail to see this.

> Considered from the perspective of the Robustness Principle,
> a.k.a. Postel's Law [2], use of (especially validated) XML is
> less than ideal because

So don't. You're not the first to say that having documents declare
their type and thus how to validate them is backwards, nor is anything
you say news to anyone who has bought into Schematron.

> Okay, maybe I'm being a bit extreme in my ideal criteria for
> XML, but I was trying to make a point, and the point is that
> XML seems (to me) to be best used for internal "enterprisy"
> applications, and even then I question its real benefits.

XML is best used whenever your data is not completely rigidly
structured and clients are wildly heterogeneous. (Of course, just
because you're using XML doesn't automatically mean you'll end up with
a well-thought-out vocabulary. Most ad-hoc vocabs are mediocre or
worse. That's hardly XML's fault.)

> Personally I find the verbosity and all the extraneous syntax
> of XML to be harder to read than JSON, not easier.

http://www.megginson.com/blogs/quoderat/2007/01/03/all-markup-ends-up-looking-like-xml/

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Stefan Tilkov wrote:
> XML has "spotty tool support"? Are there platforms that don't
> support XML?

Many language platforms that I am aware of cannot load XML as an object
where its elements and attributes become object properties.

PHP v4, still used by leading CMS packages, doesn't provide broad XML
support.

Deployed browsers differ on levels of XML support, don't support
loading XML as an object in the same manner that JSON does, and don't
provide full and easy support for all the baggage that XML often
brings. Sure, JSON doesn't have many of the capabilities that XML has,
but in many cases you don't need them.

Don't take that as implying anything else is better tool-wise, only
that other solutions don't force the need for as many tools.

Basically, please don't get too ideologically worked up by my
supposition; I was primarily trying to say that XML can both enforce
rigidity where flexibility is a virtue and can also be a very heavy
option when a lightweight solution is preferred.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
FWIW, here's my current position regarding the place of XML vis-a-vis
the Web:

I am constantly looking for *engines* to help solve problems. I like
regular expressions for this reason: it's a very simple engine that
allows a great deal of work to be done quickly and reliably. I like
XSLT for the same reason. Very little compiled code is needed in order
to accomplish tasks. XSD has the same advantage, IMHO.

I also view T-SQL as a useful engine for handling data. Typically, it
takes more resources to bootstrap T-SQL, but with the continued growth
of small relational engines (a la Gears) this is still a very valuable
engine to leverage.

Any time I can use an engine to complete a task, I'll do that before I
start to write script or code. And right now the XML family offers a
great number of useful engines. And I can count on consistent
performance on any platform and in almost any language.

Mike A

On Dec 28, 2007 6:00 PM, Mike Schinkel <mikeschinkel@...> wrote:
> Stefan Tilkov wrote:
> > Are there platforms that don't support XML?
>
> Many language platforms that I am aware of cannot load XML as an
> object where its elements and attributes become object properties.
>
> PHP v4, still used by leading CMS packages, doesn't provide broad XML
> support.
>
> Deployed browsers differ on levels of XML support, don't support
> loading XML as an object in the same manner that JSON does, and don't
> provide full and easy support for all the baggage that XML often
> brings. Sure, JSON doesn't have many of the capabilities that XML
> has, but in many cases you don't need them.
>
> Don't take that as implying anything else is better tool-wise, only
> that other solutions don't force the need for as many tools.
>
> Basically, please don't get too ideologically worked up by my
> supposition; I was primarily trying to say that XML can both enforce
> rigidity where flexibility is a virtue and can also be a very heavy
> option when a lightweight solution is preferred.
>
> --
> -Mike Schinkel
> http://www.mikeschinkel.com/blogs/
> http://www.welldesignedurls.org
> http://atlanta-web.org

--
mca
"In a time of universal deceit, telling the truth becomes a
revolutionary act." (George Orwell)
Aristotle Pagaltzis wrote: > http://plasmasturm.org/log/447/ You make a falicitous assumption that the 'public' wanted to move to XHTML when (IMO) it was only a tiny percentage that really wanted to move. And I'll argue that having the browser support it strictly (and strictness is what I'm arguing against) would have resulting in too many people seeing the web as broken which an important contingent (that mainly includes Microsoft) simply won't let that happen. So you can hypothesize, but your hypothecies don't match that which could be the only realisitic reality. > > It's biggest benefit is that it has been widely deployed but its > > complexity (namespaces, schemas, XSLT, etc.) > > All of these are opt-in. There are not opt-in when a web service that I need to consume requires me to use them. That's why I said my opinion was that XML with all its baggage works best when you control the development of both the client and the server. > Although I think well-practiced > namespaces (mustIgnore et al) are a huge win. And I think namespaces as implemented in XML are a pox on the web and software/development in general. > Actually XSLT > is not even opt-in, it's completely orthogonal (well, unless > you serve transformations to clients as a sort of > code-on-demand approach, I suppose - a rather hypothetical scenario). Again, you are making my previous point. Now don't take this as me saying "Don't use XML" just take it as me instead saying "Please don't force me to consume XML as my only option" and "Don't admonish me for choosing to serve JSON." > I'm with Stefan here - I seriously fail to see this. See my reply to Stefan. > So don't. You're not the first to say that having documents > declare their type and thus how to validate them is > backwards, nor is anything you say news to anyone who has > bought into Schematron. Okay... > XML is best used whenever your data is not completely rigidly > structured and clients are wildly heterogenous. Huh? 
That's my argument for when it is worse used, at least with namespaces and validation. If you are arguing that it is best used in contexts where namespaces and validation are not used I'll agree it's not harmful but will save that it requires a lot of overhead to download and parse both from all the extra characters and the fact that there are direct-to-object parsers are not common. > (Of course, just because you're using XML doesn't > automatically mean you'll end up with a well thought-out > vocabulary. Most ad-hoc vocabs are mediocre or worse. That's > hardly XML's fault.) Is that the "Guns don't kill people, people do" argument? > > Personally I find the verbosity and all the extranious > syntax of XML > > to be harder to read than JSON, not easier. > > http://www.megginson.com/blogs/quoderat/2007/01/03/all-markup- > ends-up-looking-like-xml/ I'll see your URL and raise you one: http://www.25hoursaday.com/weblog/PermaLink.aspx?guid=d639d908-7fbc-40cd-8e3 6-e6d48c07f659 http://www.25hoursaday.com/weblog/PermaLink.aspx?guid=39842a17-781a-45c8-ade 5-58286909226b http://www.pluralsight.com/blogs/dbox/archive/2007/01/03/45560.aspx But back to your URL: Quoting Dare Obasanjo: ====== You XML folks completely miss the point. JSON is important because it is better supported in the browser than XML. That's why it has taken hold. Arguing that angle brackets, S-expressions and JSON syntax are all semantically equivalent is the height of architecture astronautics[0] and COMPLETELY misses the point. ====== Quoting David Megginson: ====== all modern browsers have support built-in for parsing XML safely, though the interface they present to the programmer (DOM) is low-level and awkward, so in practice, you have to install some kind of separate library to simplify processing. ... I don't claim that anyone else should have exactly the same experience as me, but it is worth noting that this is a pretty heavily subjective area. ====== Quoting John D. 
Mitchell: ====== While the nominal complicatedness is similar, note the clarity differences - particularly in the last, most complicated example. The xml example is almost completely un-scannable and so has to be (slowly, carefully) read. The Lisp is easy/fast to scan. The JSON is somewhere in-between. ... I think you're helping to prove my point. You've been "reading" XML for so long that you think that it's scannable. Seriously, there's a huge difference between in the ability to carefully read something and the ability to glance at it and get it that is, IMHO, grossly underestimated by advocates of "dense" languages. This cost goes up as the length and complicatedness of the documents increases. ====== Quoting Masklinn: ====== Wow. David, have you at no point considered refactoring your Lisp and JSON markups? They're horrible, you're piling crap on crap it's idiotic. ====== Quoting rektide: ====== the thing I like about JSON is that it maps to a object oriented data structure, and the tooling surrounding it is just about serial/deserialization. in comparison, the DOM data structure used by XML is immensely frustrating to deal with. multiple text node children hanging off elements, content validation before you can .InnerText, theres just a never ending rigamaroll to do what is ultimately a very very simple task, and i for one do not enjoy writing such verbose kludgtastic code. the data structures built by xml tooling just suck horrendous ass. ====== Quoting David Megginson again: ==== rektide: I think you hit on a major problem with XML in the browser - it's not XML itself, but the DOM, which is an extremely awkward interface to use. Part of that is because DOM has to be able to handle mixed content as well as fielded data, but a lot of it is simply the DOM's design history. 
==== And finally quoting Andrzej Taramina who quotes H.L. Mencken: ==== Might be worth considering Mencken's thoughts on XML vs JSON vs LISP et al.: "We must accept the other fellow's religion, but only in the sense and to the extent that we respect his theory that his wife is beautiful and his children smart." XML vs JSON vs LISP, that would be the "religion" part. ==== Gotta love the ending of rektide's quote about xml tooling sucking horrendous ass. '-) But I think his point is that all the formats have benefits and XML may have some benefits as complexities grow, which I won't debate. OTOH, I like to minimize complexity in web services as much as possible. Anyway, gotta love it when someone argues with someone else about that person's opinion on what is usable for them... '-) BTW, if there is one thing that can really get my hackles up, it's when people argue that syntactical differences are just sugar... As an aside, in reference to your comment on Megginson's blog: "I used to hate XML. Then I discovered XPath." For JSON it's called jQuery. Hey, if the XML folks can argue for finding tools that make it easier, why can't we? '-) -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org
mike amundsen wrote: > FWIW, here's my current position regarding the place of XML > vis-a-vis the Web: > > I am constantly looking for *engines* to help solve > problems. I like regular expressions for this reason. it's a > very simple engine that allows for a great deal of work to be > done quickly and reliably. I like XSLT for the same reason. > Very little compiled code is needed in order to accomplish > tasks. XSD has the same advantage, IMHO. In response my position is that both RegEx and XSLT are too complex and too fragile to offer solutions on a broad scale. They are really only great solutions for top-tier developers, where those developers encapsulate their work and never require average developers to have to interact with RegEx and XSLT (think of mod_rewrite as a great counter-example.) IOW, RegEx and XSLT let me consider things to be nails, but I'd really like to have tools that are better designed for things that are not nails. And I say this after building a very complex XSLT solution a few years ago that in hindsight I could probably have done in another manner in 1/10th the time and had far fewer problems with it. I can also say the same of trying to use SOAP many years ago when I really should have been using (something like) REST. > I also view T-SQL as a useful engine for handling data. > Typically, it takes more resources to bootstrap T-SQL, but > with the continued growth of small relational engines (ala > Gears) this is still a very valuable engine to leverage. There's where we can agree. I've used T-SQL for over 10 years and for 10 years prior to that worked with and wrote a book about[1] record-based data processing languages (xBase.) Today I'm using the MySQL variant of SQL instead of T-SQL, but they are conceptually close. I do pine for many features of T-SQL/SQL Server though. 
OTOH, I don't think I'd use SQL as a web-service content type, except maybe from the server to the client ala Google Gears, but certainly not as something a client gets to send to a server. Also one day I plan to blog a rant about how I feel the poor choice in initial SQL language design has hampered maintainability and reliability, including criticizing the required ordering of FROM, WHERE, GROUP BY, and ORDER BY clauses, criticizing the use of commas as field separators, and criticizing how INSERT and UPDATE have incompatible syntax, as well as criticizing implementations that don't support arguments of SQL clauses (field lists, join structures, and criteria expressions) as reusable first-class objects. > Any time I can use an engine to complete a task, I'll do that > before I start to write script or code. Ditto, but... > And right now the > XML family offers a great number of useful engines. And I can > count on consistent performance on any platform and almost > any language. ...they must be deployed, and JSON parsing directly to objects is deployed by default in browsers. As a way to elaborate I'll employ Aristotle Pagaltzis's technique and paste a few links that I already presented him in an earlier email: http://www.25hoursaday.com/weblog/PermaLink.aspx?guid=39842a17-781a-45c8-ade5-58286909226b http://www.pluralsight.com/blogs/dbox/archive/2007/01/03/45560.aspx -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org
> Basically, please don't get too ideologically worked up by my > supposition; > I was primarily trying to say that XML can both enforce rigidity where > flexibility is a virtue and can also be a very heavy option when a > lightweight solution is preferred. Well said. I agree. Subbu
* Mike Schinkel <mikeschinkel@...> [2007-12-29 01:35]: > Aristotle Pagaltzis wrote: > > http://plasmasturm.org/log/447/ > > You make a fallacious assumption that the 'public' wanted to > move to XHTML when (IMO) it was only a tiny percentage that > really wanted to move. I never said the public wanted to. I said some of them have, which is what you say; the rest didn’t care. They (or at least, a significant portion of them) would have moved, had there been any tangible payoff – had XHTML offered features that they didn’t already have. But it doesn’t; only programmers with XML tools at their disposal had a reason to move, whereas everyone else had lots of reason not to. > And I'll argue that having the browser support it strictly (and > strictness is what I'm arguing against) would have resulted in > too many people seeing the web as broken, which an important > contingent (that mainly includes Microsoft) simply won't let > happen. I think MSFT of all entities would be more than happy to let the web appear broken. If you think the web is broken, Redmond has a Silverlight engine to sell you… > So you can hypothesize, but your hypotheses don't match what > could be the only realistic reality. You cannot do anything but hypothesize either, and between your hypothesis and mine, I think mine is less hypothetical… but all we’re doing is hypothesizing. > > (Of course, just because you're using XML doesn't > > automatically mean you'll end up with a well thought-out > > vocabulary. Most ad-hoc vocabs are mediocre or worse. That's > > hardly XML's fault.) > > Is that the "Guns don't kill people, people do" argument? Well just because you use JSON doesn’t mean you’ll end up with a well-designed data structure either. Too little or too much nesting, extensibility provisions and the like are all concerns you’ll have to deal with. XML relieves you of some of them to some extent, at the expense of creating other complications. 
All of the data formats are just tools, and how well they’re used inescapably depends on the one who wields them. > Quoting Dare Obasanjo: > ====== > You XML folks completely miss the point. JSON is important > because it is better supported in the browser than XML. That's > why it has taken hold. Arguing that angle brackets, > S-expressions and JSON syntax are all semantically equivalent > is the height of architecture astronautics[0] and COMPLETELY > misses the point. Dare conveniently overlooks that it’s the JSON proponents themselves who tout syntactic brevity as a major win for JSON. > Quoting Masklinn: > ====== > Wow. David, have you at no point considered refactoring your > Lisp and JSON markups? They're horrible, you're piling crap on > crap it's idiotic. Yeah, mixed content is inconvenient and alien if you’re a programmer and all the world’s a data structure… in which case it’s not surprising that you think trying to map out a mixed content model in data structures is crap piled on crap and looks idiotic. Of course if the data is rigidly structured, it *is* idiotic… > Quoting rektide: > ====== > the DOM data structure used by XML is immensely frustrating to > deal with. multiple text node children hanging off elements, > content validation before you can .InnerText, theres just a > never ending rigamaroll to do what is ultimately a very very > simple task, and i for one do not enjoy writing such verbose > kludgtastic code. the data structures built by xml tooling just > suck horrendous ass. So use XPath. DOM blows chunks and a bunch of other things. > But I think his point is that all the formats have benefits and > XML may have some benefits as complexities grow which I won't > debate. As I said, if you have a rigid data structure, XML isn’t a good fit. If you have regular tabular data, even JSON is a lesser choice than CSV. (And lord almighty did people get obsessed with replacing perfectly good CSV interfaces with XML.) 
I’m certainly not one to champion XML über alles. (Look back in the list’s archives a while ago – the Megginson post I linked to was a reaction to a blog spat that Elliotte Rusty Harold more or less started by carrying an argument over to his weblog that had begun on this list, where it was mainly between me and him. In which I was in the role of JSON defender…) > OTOH, I like to minimize complexity in web services as much as > possible. Things should be made as simple as possible, but no simpler. Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
On Dec 25, 2007 7:09 AM, A. Pagaltzis <pagaltzis@...> wrote: > An example of what I was thinking of is when a client PUTs a > representation that cannot, due to the format used, describe the > entire state of the resource. In that case, the server has to > fill in the rest of the state somehow, and it may or may not use > previous resource state to do so. > > Also, there may be aspects of resource state that the server does > not allow clients to modify, ever. An obvious example of the > latter would be the app:edited element in an Entry stored by an > Atompub server. But it's easy to imagine a variation of this > element that contains not a datetime, but an edit counter. The > new value of this counter after a PUT would then obviously be > based on its previous value and would not be based on anything > that the client included in its request. > > So there are various legitimate ways in which new resource state > may derive from previous resource state. So how would you define a partial update to distinguish it from a full update? And when would you force clients to use PATCH instead of PUT? -- Assaf http://labnotes.org > > However, in no such scenario are the semantics of PUT affected. > > Regards, > -- > Aristotle Pagaltzis // <http://plasmasturm.org/>
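The distinction Assaf is asking about can be pictured with a small sketch (my own illustration, not anything from the thread): PUT replaces the whole representation, so members the client does not mention simply vanish, while a partial-update PATCH applies a delta and preserves the rest of the state.

```python
def put(resource: dict, representation: dict) -> dict:
    # PUT: the representation *is* the new state; nothing carries over,
    # so server-maintained members would have to be filled in separately.
    return dict(representation)

def patch(resource: dict, delta: dict) -> dict:
    # PATCH: only the named members change; everything else is preserved.
    new = dict(resource)
    new.update(delta)
    return new

state = {"title": "Old", "edited": 1, "body": "hello"}
put(state, {"title": "New"})    # "edited" and "body" are gone
patch(state, {"title": "New"})  # "edited" and "body" survive
```

On this reading, a server would force PATCH whenever the format cannot carry the entire state, which is exactly the scenario Aristotle describes.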
Forwarding to the list because I hit reply instead of reply all. Sigh... ---------- Forwarded message ---------- From: Daniel Yokomizo <daniel.yokomizo@...> Date: Dec 28, 2007 4:25 PM Subject: Re: [rest-discuss] Idempotency of PUT when using IF-MATCH To: Julian Reschke <julian.reschke@...> On Dec 28, 2007 3:58 PM, Julian Reschke <julian.reschke@...> wrote: > Daniel Yokomizo wrote: > > I wasn't entirely clear. Arguably all the versions of the entity are > > current, as the user may get older versions and at a future date > > branch on old versions. Instead of creating a protocol to handle > > That doesn't make them "current". They may be current for other URLs. > > Or do you want to serve those older versions from the same URL? In that > case it would be correct, and it would be a case of content negotiation > (incl Vary header etc). > > > versioning above HTTP I thought using the existing versioning support > > would be better. > > There already is a protocol for versioning over HTTP (RFC3253), so you > don't need to invent anything new. > > > > So yes, PUT isn't idempotent when used as a conditional request. > > > > Where is this specified? Or is it a known convention? > > > > In this situation the server knows that this is a repeated submission > > and it's semantically safe to ignore it. I could just drop IF-MATCH in > > I don't see why it would be safe to ignore it. If a client puts FOO, > then BAR, then FOO, ignoring the last one will not result in the state > expected. In this situation it is PUT <content> IF-MATCH <version>, so the sequence is either PUT FOO IF-MATCH V1, PUT BAR IF-MATCH V2, PUT FOO IF-MATCH V3 or PUT FOO IF-MATCH V1, PUT BAR IF-MATCH V2, PUT FOO IF-MATCH V1 Both sequences are explicit: override the third version with FOO or override the first version with FOO. The server can safely see if it's a duplicate or not. > > this case and make the client send the version identifier in the > > request body, but I'm not sure the problem would be much different. 
If > > I ignore IF-MATCH issues the same situation can happen and the server > > can safely ignore the duplicate PUT, but I would still ask the same > > question. > > It seems that the main issue here is ignoring the PUT, just because the > content was sent before. How can you assume this is ok -- maybe the > client intended to overwrite the second version with the first one? I'm using SHA256 to digest the state of the entity, so if the digest is the same there's a really high degree (astronomically high) of confidence that it's the same entity. As the client sends what they believe to be the current version (i.e. IF-MATCH) the server knows if it's the same request or a later one. As the system ensures no collision of version ids (we just reject duplicates, the chance of collision is really small), it's always safe to ignore the duplicate requests. This algorithm is inspired by Git. > BR, Julian Best regards, Daniel Yokomizo.
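Daniel's Git-inspired scheme might be sketched like this (a toy with invented names, not his actual system): the ETag is the SHA-256 of the entity body, so a retried PUT whose body hashes to the resource's current ETag is recognizably the very request that produced that state, and can be acknowledged rather than rejected.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Resource:
    """Toy in-memory resource where the version id *is* the state digest."""
    def __init__(self, body: bytes):
        self.body = body

    @property
    def etag(self) -> str:
        return sha256_hex(self.body)

    def put(self, body: bytes, if_match: str) -> int:
        if if_match == self.etag:
            self.body = body  # precondition holds: normal conditional update
            return 200
        if sha256_hex(body) == self.etag:
            # The incoming body hashes to the current state: this is a
            # duplicate of the PUT that created it, safe to acknowledge.
            return 200
        return 412  # a different update intervened; precondition failed
```

The "astronomically high confidence" Daniel mentions is the usual argument that SHA-256 collisions are negligible in practice.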
Forwarding to the list because I hit reply instead of reply all. Sigh again... ---------- Forwarded message ---------- From: Daniel Yokomizo <daniel.yokomizo@...> Date: Dec 28, 2007 9:38 PM Subject: Re: [rest-discuss] Re: Idempotency of PUT when using IF-MATCH To: "Roy T. Fielding" <fielding@...> On Dec 28, 2007 8:56 PM, Roy T. Fielding <fielding@...> wrote: > On Dec 28, 2007, at 8:37 AM, Daniel Yokomizo wrote: > > On Dec 28, 2007 4:08 PM, Aristotle Pagaltzis <pagaltzis@...> wrote: > > > * Daniel Yokomizo <daniel.yokomizo@...> [2007-12-28 14:30]: > > > > 1. Client A sends PUT ... (IF-MATCH: "XXX") ... FOO ... > > > > 2. Server processes the request and sends the new ETag: YYY > > > > 3. Client B sends PUT ... (IF-MATCH: "YYY") ... BAR > > > > 4. Server processes the request and sends the new ETag: ZZZ > > > > 5. Client A retries the first PUT (network problems, double > > submit, it > > > > doesn't matter) sends ETag: YYY > > > > > > I don't understand this sequence. You say the client sends ETag > > > XXX in step #1, but when it retries the request in step #5, it > > > sends ETag YYY. Did you make a typo somewhere or did you omit > > > things from your timeline? > > > > Oops, it's a typo indeed. It should be ETag XXX. I hope this clears > > remaining confusion for everyone. It's supposed to be a simple retry > > of the first PUT (e.g. client A > > never saw the response and wants to ensure everything is fine). In the > > meantime Client B saw A's changes (because he has the unforgeable YYY > > ETag) and submits its own changes. > > > > In that case, client A definitely wants to receive the 412, since > otherwise it will never know about client B's changes. I see, but doesn't a 412 imply that the operation failed? The tricky part is that client A doesn't know if the request was processed (e.g. connection issues) and as PUT is idempotent it automatically retries. If it gets a 412 it will assume the request was never processed in the first place, when it was. 
The only way I see to let it know the request was processed is returning a 20X, but it won't be able to know that B processed it later. We can return "Cache-Control: must-revalidate" to indicate the content is stale. This situation worries me because the server knows the request is a duplicate submission, it knows the initial submission worked, and there is no correct usage that would issue this twice if it were not for PUT's idempotency. Rephrasing my original question: is it against the spec to return a 20X (with "Cache-Control: must-revalidate" if other submissions already happened), instead of 412? Is this usage of "Vary: If-Match" and If-Match unsound? > A more interesting question is whether the spec over-constrains the > case where a PUT is successful but tried again. In other words, should > the server be allowed to accept the PUT if the etag differs but the > current state matches what is being PUT? Subversion handles such > cases nicely because it is common for two developers to patch the > same bugs. I think the "MUST respond with 412" is yet another case > of a bogus requirement being added in 2616. > > Note: this is an HTTP spec issue, not a REST issue. > > ....Roy Best regards, Daniel Yokomizo.
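Daniel's proposal — acknowledging a recognized duplicate with a 20X plus "Cache-Control: must-revalidate" instead of the 412 that RFC 2616 mandates — might be sketched as follows (a toy model with invented names, not his actual server). The server remembers, per precondition ETag, the digest of the body that was successfully PUT, so a retry of an already-applied request is distinguishable from a genuinely conflicting one.

```python
import hashlib

def digest(body: bytes) -> str:
    return hashlib.sha256(body).hexdigest()

class Resource:
    """Toy server illustrating the duplicate-retry question in the thread."""
    def __init__(self, body: bytes):
        self.body = body
        self.applied = {}  # precondition etag -> digest of the body PUT with it

    @property
    def etag(self) -> str:
        return digest(self.body)

    def put(self, body: bytes, if_match: str):
        if if_match == self.etag:
            self.applied[if_match] = digest(body)
            self.body = body
            return 200, {}
        if self.applied.get(if_match) == digest(body):
            # Same body under the same precondition was already applied once:
            # acknowledge it, but mark the response stale so the client
            # revalidates and discovers any later changes (e.g. B's PUT).
            return 200, {"Cache-Control": "must-revalidate"}
        return 412, {}  # what RFC 2616 requires in all other mismatch cases
```

Whether that 200 is spec-legal is exactly the open question; Julian's answer in the thread is that it is not, absent extensions.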
A. Pagaltzis wrote: > ... > An example of what I was thinking of is when a client PUTs a > representation that cannot, due to the format used, describe the > entire state of the resource. In that case, the server has to > fill in the rest of the state somehow, and it may or may not use > previous resource state to do so. > ... I would argue that this is a bad use of PUT. If the representation you send does not describe the entire state of the resource, you shouldn't be using PUT, or that format. Taking some information from the previous state IMHO is incorrect. It's probably time for RFC2616bis to clarify this. BR, Julian
Daniel Yokomizo wrote: > > In that case, client A definitely wants to receive the 412, since > > otherwise it will never know about client B's changes. > > I see, but doesn't a 412 imply that the operation failed? The trick It implies that the request was rejected because the conditions evaluated to false. > part is that client A don't know if request was processed (e.g. > connection issues) and as PUT is idempotent it automatically retries. > If it gets a 412 it will assume the request was never processed in the > first place, when it was. The only way I see to let it know the Why would it assume that? Just because a retry returns 412 doesn't mean the original request didn't succeed. > request was processed is returning a 20X, but it won't be able to know > that B processed it later. We can return "Cache-Control: > must-revalidate" to indicate the content is stale. It seems that you're looking for a way for the server to indicate in a response to a retry that it's aware that this is a retry, and the original request succeeded. I don't believe you can do that without extensions. > This situation worries me because the server knows the request is a > duplicate submission, it knows the initial submission worked and there > are no correct usage that would issue this twice if it was not to for > PUT's idempotency. > > Rephrasing my original question, is it against the spec to return a > 20X (with "Cache-Control: must-revalidate" if other submissions > already happened), instead of 412? Is this usage of "Vary: If-Match" > and If-Match unsound? Yes, I think this is incorrect. > ... BR, Julian
Roy T. Fielding wrote: > ... > Note: this is an HTTP spec issue, not a REST issue. > ... Right. It would be really good if, now that we actually have a new HTTP working group, people would go there if they think RFC2616 is incorrect or unclear somewhere. BR, Julian
Mike Schinkel wrote: > > IOW, RegEx and XSLT let me consider things to be nails, but I'd really like > to have tools that are better designed for things that are not nails. > But, if you *are* dealing with nails, isn't the appropriate tool a hammer? What I mean is, Drupal is a system for managing HTML documents, right? So, what is the point of using a SQL database for this task, as opposed to say, an XML database which *is* designed to handle marked-up documents? If you insist that documents must be derived from SQL databases, then yes, perhaps JSON is a better alternative. But, I see collections of marked-up documents as an ideal use of XML, specifically Atom. We aren't talking about some theoretical problem here, we're talking about documents and discussion threads and such, which model a lot better as Atom than they do as SQL, IMHO. > >> Any time I can use an engine to complete a task, I'll do that >> before I start to write script or code. > > Ditto, but... > >> And right now the >> XML family offers a great number of useful engines. And I can >> count on consistent performance on any platform and almost >> any language. > > ...they must be deployed and JSON parsing directly to objects is deployed by > default in browsers. > Granted, JavaScript is deployed in more browsers than XML, but I wouldn't go so far as to call it a default. I would go so far as to say that, where XML has been deployed in browsers, it is much more consistent cross-platform than JavaScript. Developing and debugging AJAX applications cross-browser is 100 times more of a chore than XSLT apps. > > They are not opt-in when a web service that I need to consume requires me > to use them. That's why I said my opinion was that XML with all its baggage > works best when you control the development of both the client and the > server. 
> A web service which requires me to use XSLT is using standards that anyone can look up and implement (if needed), whereas a service feeding me JSON is requiring me to opt-in to a code library to understand how objects are converted into markup, and understand the client-by-client implementation differences. Such a service would have a much steeper learning curve and greater maintenance hassles, than a system which accomplishes the same thing using Atom + XSLT. Aristotle Pagaltzis wrote: > > (well, unless you serve transformations to clients as a sort of > code-on-demand approach, I suppose – a rather hypothetical scenario) > Nothing hypothetical about it, see the link below... Isn't that a much lighter-weight solution? Opera is on the verge (9.5 is in beta) of adding the document() function, making all the major browsers XSLT-compatible (even if Microsoft doesn't follow the standards, quite). Which means a server can send Atom to browsers, with an XML PI to fire off a client-side XSLT transformation, which only needs to be coded once as it is only a couple of lines different from the identical XSLT code on the server, which can generate HTML for clients that don't have XSLT. With this JSON/AJAX approach, the fallback for browsers with no JavaScript would be what, PHP? So that's PHP and JavaScript being used to accomplish the transformation task XSLT was designed for, except with twice the labor overhead. Using Atom + XSLT, the fallback for clients that don't grok XSLT is Atom, much more likely to be understood as a raw document than JSON. OK, OK, I'll post a URL to illustrate, but please bear in mind this is alpha code and needs a major rewrite, especially conneg. For now, all versions of Safari and Firefox are treated as XSLT clients, and all versions of Opera are treated as non-XSLT clients. Which is why I use cookies as part of conneg, so you can override conneg using an (optional) parameter on the URL and bookmark individual variants... 
Anyway, give this a try: http://ericjbowman.com/2006/aug/09/11;view=xslt The output from WordPress is all Atom and is stored in an XML database, MySQL is still there to make the admin interface work, etc. If your browser supports XSLT, all data transfers could be Atom documents (although our setup uses an extra step, the gist is Atom --> XSLT --> (X)HTML). If not, the same XSLT code just runs on the server. So, content negotiation aside, all that's really needed in order to enable a server-side XSLT approach to work client-side on modern Web browsers is to add one line of code, an XML PI, to the Atom output. Does it get any simpler? Go XML! My point is to keep an open mind about what browsers can do these days. JavaScript is no longer the only viable alternative for client-side coding. I guess my other point, is that PHP apps don't really need to know how to handle XML, to generate XML. -Eric
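The "one line of code, an XML PI" Eric refers to is the xml-stylesheet processing instruction. A minimal sketch of prepending it to an Atom document (the stylesheet href and function name here are illustrative, not from his setup):

```python
def add_stylesheet_pi(atom_doc: str, href: str) -> str:
    """Insert an xml-stylesheet PI right after the XML declaration, so
    XSLT-capable browsers run the transformation client-side."""
    pi = '<?xml-stylesheet type="text/xsl" href="%s"?>' % href
    decl, sep, rest = atom_doc.partition("?>")
    return decl + sep + "\n" + pi + rest

feed = ('<?xml version="1.0" encoding="utf-8"?>\n'
        '<feed xmlns="http://www.w3.org/2005/Atom"><title>demo</title></feed>')
# add_stylesheet_pi(feed, "/atom2html.xsl") yields the same feed with the PI
# as its second line; clients without XSLT just see the raw Atom, which the
# server can instead transform with the same stylesheet.
```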
On Dec 29, 2007, at 12:00 AM, Mike Schinkel wrote: > Basically, please don't get too ideologically worked up by my > supposition; > I was primarily trying to say that XML can both enforce rigidity where > flexibility is a virtue and can also be a very heavy option when a > lightweight solution is preferred. Fair enough -- I wasn't suggesting that XML be used for everything, as I agree there are lots of cases where it would be overkill. I would still claim that nothing can rival the XML ecosystem of tools and libraries and standards, except for the Web itself ;-) Stefan -- Stefan Tilkov, http://www.innoq.com/blog/st/
Berend de Boer wrote: > > Mike> Considered from the perspective of the Robustness principle > Mike> a.k.a. Postel's Law [2], use of (especially validated) XML is > Mike> less than ideal because it puts a burden of conservatism on > Mike> others. > > This is the most horrible law ever invented and it allows > every crappy programmer to write garbage that others simply > have to accept. > > And you know why crappy programmers don't detect they emit garbage? > Because of all those other crappy programmers who, believing > Postel's law, happily accept garbage and try to make sense of it. With that idealistic view of what IMO is one of the most important principles to ensure robustness on the Internet, I guess we don't have anything to discuss. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org
Eric J. Bowman wrote > But, if you *are* dealing with nails, isn't the appropriate > tool a hammer? Yes, but when did I say we are dealing with nails? > What I mean is, Drupal is a system for managing HTML > documents, right? No, not exactly. It is a system for managing content (vs. just documents), and in the broader sense it is a framework for writing web apps. Pigeonholing it as a system for managing HTML documents is doing it and its community a huge disservice. > So, what is the point of using a SQL > database for this task, as opposed to say, an XML database > which *is* designed to handle marked-up documents? Well, uh, because Drupal uses MySQL to store its content? Besides, I once tried for about six months to use XML to store documents and it turned out to be the biggest nightmare project I ever undertook, and it failed miserably. So why would I want to use an XML database? > If you > insist that documents must be derived from SQL databases, > then yes, perhaps JSON is a better alternative. If *I* insist? Why not learn something about the open-source project before assigning me the responsibility of having selected the backend. (Even so, I would not these days ever choose an XML database...) > But, I see > collections of marked-up documents as an ideal use of XML, > specifically Atom. We aren't talking about some theoretical > problem here, we're talking about documents and discussion > threads and such, which model a lot better as Atom than they > do as SQL, IMHO. In that case, why not go over to http://drupal.org and start a discussion thread there telling them how everyone in the community has been getting it wrong all this time and that everyone using Drupal should immediately stop what they are doing and port to an XML database? Sorry, your assertion was just so over-the-top I couldn't resist the sarcasm. > > ...they must be deployed and JSON parsing directly to objects is > > deployed by default in browsers. 
> > > > Granted, JavaScript is deployed in more browsers than XML, > but I wouldn't go so far as to call it a default. You are splitting hairs. > I would go > so far as to say that, where XML has been deployed in > browsers, it is much more consistent cross-platform than > JavaScript. I don't have the expertise to confirm or deny, but the point was that "When comparing ... it seems clear that the XML version takes more code (in the browser than when using JSON) and requires a layer of mental indirection as the developer has to be knowledgeable about XML APIs and their idiosyncrasies." This is just quoting Dare Obasanjo: http://www.25hoursaday.com/weblog/PermaLink.aspx?guid=39842a17-781a-45c8-ade5-58286909226b > Developing and debugging AJAX applications > cross-browser is 100 times more of a chore than XSLT apps. I strongly disagree with that assertion. I worked on a failed project I previously referenced that used XSLT, and as it grew it became more and more fragile to the point of being completely unworkable. The side effects that can result from an XSLT transform and their unintended consequences are legendary. > A web service which requires me to use XSLT is using > standards that anyone can look up and implement (if needed), True, XSLT is a standard, but just because it made its way to a standard doesn't mean it was a good idea. Group-think is, has always been, and will always be alive in standards processes. > whereas a service feeding me JSON is requiring me to opt-in > to a code library to understand how objects are converted > into markup, and understand the client-by-client > implementation differences. Using XSLT requires one to opt in to an XSLT library, and I know from experience the pain of the differences there are between those. The things you can't do in XSLT that Microsoft's version allows are really pathetic. I'd quote them if I hadn't blocked most of my XSLT knowledge from memory. 
Transforming JSON is trivial: http://ajaxian.com/archives/transforming-json JSON is just not as mature as XML, but that's rapidly changing. By the same token Windows NT Server is more mature than Windows 2003 Server, but you don't see too many people choosing the former over the latter. > Such a service would have a much > steeper learning curve and greater maintenance hassles, than > a system which accomplishes the same thing using Atom + XSLT. That is a non-sequitur. I've already mentioned my painful experience that any XSLT app which is anything larger than trivial is a maintenance nightmare. By its very nature XSLT is fragile and effectively impossible to test because of its side-effecty nature. Conversely you can unit test JavaScript (or any number of other similar languages) and ensure that a unit of code works as intended and always will. But the irony of this argument is I don't really need to make it. A large number of developers are using JSON/JavaScript instead of XML/XSLT for the very reasons I mention; it is just a whole lot easier to use JSON than XML in the browser. That may frustrate those enamored with XML/XSLT, but developers are voting with their feet and all those advocating XML can't stop it. My guess is someone will eventually create an Atom-to-JSON converter, and then someone else will create an Atom equivalent in JSON and who knows where that might take us. Oh wait, it's already happening! http://www.google.com/search?q=json+atom :-) Listen, the reason I'm arguing strongly for JSON and against XML is because of the cargo-cultist mentality regarding XML that I seem to have unearthed here. I'm not really anti-XML (though I am anti-XSLT and against overdoing namespaces, especially namespaces identified by non-dereferenceable URIs), I'm just seeing that it is many times easier to work with JSON in the browser. The fact you have to defend XML tells me thou doth protest too much. > Isn't that a much lighter-weight solution? 
Opera is on the > verge (9.5 is in > beta) of adding the document() function, making all the major > browsers XSLT- compatible (even if Microsoft doesn't follow > the standards, quite). Which means a server can send Atom to > browsers, with an XML PI to fire off a client-side XSLT > transformation, which only needs to be coded once as it is > only a couple of lines different from the identical XSLT code > on the server, which can generate HTML for clients that don't > have XSLT. > > With this JSON/AJAX approach, the fallback for browsers with > no JavaScript would be what, PHP? Sure. But it could also be done in Python, Ruby, Perl, Lisp, Java, C#, VB.NET, VBScript... As you say, it only needs to be coded once (per language.) > So that's PHP and > JavaScript being used to accomplish the transformation task And the problem is? > XSLT was designed for, You forgot to insert "poorly" between "was" and "designed"... '-) > except with twice the labor overhead. How is it twice the overhead? Besides, grokking XSLT takes an order of magnitude more mental effort for most people. When are all the really smart people who can understand what most people can't going to learn that simplistic and easy keeps winning the war on the web over advanced and difficult? It took a really long time for me to accept that myself, but now that I've seen the light, well, as they say, there are none more zealous than the recently converted. > Using Atom + XSLT, the fallback for clients that don't grok > XSLT is Atom, much more likely to be understood as a raw > document than JSON. How is it more likely to be understood? JSON has fewer non-data syntax characters than XML that would otherwise confuse the uninitiated. > OK, OK, I'll post an URL to illustrate, but please bear in > mind this is alpha code and needs a major rewrite, especially > conneg. For now, all versions of Safari and Firefox are > treated as XSLT clients, and all versions of Opera are > treated as non-XSLT clients. 
Which is why I use cookies as
> part of conneg, so you can override conneg using an (optional)
> parameter on the URL and bookmark individual variants...
>
> Anyway, give this a try:
>
> http://ericjbowman.com/2006/aug/09/11;view=xslt

NOW I see why you are arguing so strenuously for XML & XSLT. If you accepted that I was right about JSON you'd have to accept that all your efforts on our XML+XSLT project were for naught. Knowing human nature's need to justify its decisions, I really probably should not be arguing this with you, because even if I'm right your efforts on XSLT would make it hard for you to admit it.

> The output from WordPress is all Atom and is stored in an XML
> database, MySQL is still there to make the admin interface work, etc.
> If your browser supports XSLT, all data transfers could be Atom
> documents (although our setup uses an extra step, the gist is
> Atom --> XSLT --> (X)HTML). If not, the same XSLT code just runs on
> the server. So, content negotiation aside, all that's really needed in
> order to enable a server-side XSLT approach to work client-side on
> modern Web browsers is to add one line of code, an XML PI, to the Atom
> output. Does it get any simpler? Go XML!

Frankly, I'd be horrified if that were ever to take off on a broad scale. Fortunately, I'm pretty sure it won't; at least not the XSLT part; XSLT is just too hard to learn for the average Joe. Honestly speaking, aside from being a magnificent example of doing it because it can be done, what tangible benefits does this really provide that were not already available on vanilla WordPress?

> My point is to keep an open mind about what browsers can do these
> days. JavaScript is no longer the only viable alternative for
> client-side coding.

Are you really seriously suggesting building a fully working web app with XSLT and forsaking all JavaScript? Aside from not seeing how it will work, the thought of having to code again in XSLT gives me the cold chills...
> I guess my other point, is that PHP apps don't really need to know how
> to handle XML, to generate XML.

True, but that doesn't help the client consume it.

But hey, if you are wedded to XML+XSLT and you have invested tons of time into a project using them, then more power to you; don't let my distaste for the tools you used get you down. Seriously; I'm just one person trying to get something done. You don't have to convert me to your religion just as I don't need to convert you to mine. As for the broader web, come what may, as I doubt any potential debates we might have could affect that outcome, whatever our respective religions may be.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
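For reference, the "one line of code, an XML PI" that Eric describes in the exchange above is a single xml-stylesheet processing instruction at the top of the Atom document. A minimal sketch, with a hypothetical href (it would point at the site's actual stylesheet):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- The href below is a placeholder path, not from the thread. -->
<?xml-stylesheet type="text/xsl" href="/styles/atom-to-xhtml.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <!-- ... entries ... -->
</feed>
```

An XSLT-capable browser fetches the referenced stylesheet and renders the transformed result; a non-XSLT client sees the raw Atom, which is where the server-side fallback comes in.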
Aristotle Pagaltzis wrote:
> > You make a fallacious assumption that the 'public' wanted to move to
> > XHTML when (IMO) it was only a tiny percentage that really wanted to
> > move.
>
> I never said the public wanted to. I said some of them have, which is
> what you say; the rest didn't care. They (or at least, a significant
> portion of them) would have moved, had there been any tangible payoff:
> had XHTML offered features that they didn't already have. But it
> doesn't; only programmers with XML tools at their disposal had a
> reason to move, whereas everyone else had lots of reason not to.

I'll agree that XHTML minimally needed to offer features they did not already have, but I'd say it is an untestable conclusion that people would have moved had it had more features. We can't know what effect XHTML's strictness would have had on uptake, which I believe would have had a significant damping effect. But even my conclusion is untestable, so we might as well agree to be confined to our own opinions on this one.

> > And I'll argue that having the browser support it strictly (and
> > strictness is what I'm arguing against) would have resulted in too
> > many people seeing the web as broken, which an important contingent
> > (that mainly includes Microsoft) simply won't let happen.
>
> I think MSFT of all entities would be more than happy to let the web
> appear broken. If you think the web is broken, Redmond has a
> Silverlight engine to sell you

Heh. You weren't in the 1.5-hour meeting I had with Chris Wilson where I begged him to consider certain features for HTML5, where his repeated refrain to me was: "No can do. It will cause the web to break for a segment of users and if it even affects as many as 0.1% of users then it's a non-starter because 0.1% of users would represent such a large number of actual users." No, I think 'Microsoft' *really* wants to ensure the web doesn't break, to a fault.
> > > (Of course, just because you're using XML doesn't automatically
> > > mean you'll end up with a well thought-out vocabulary. Most ad-hoc
> > > vocabs are mediocre or worse. That's hardly XML's fault.)
> >
> > Is that the "Guns don't kill people, people do" argument?
>
> Well, just because you use JSON doesn't mean you'll end up with a
> well-designed data structure either. Too little or too much nesting,
> extensibility provisions and the like are all concerns you'll have to
> deal with.

Granted.

> All of the data formats are just tools, and how well they're used
> inescapably depends on the one who wields them.

That makes the assumption that all tools are equally capable and that all tools equally address all requirements, which clearly isn't true. Each tool works best in its optimized context, and in the case of JSON vs. XML the former works better in part because it doesn't present as many options one has to deal with. Dereferencing a JavaScript object is much more consistent than accessing data in the DOM or via XPath.

> > Quoting Dare Obasanjo:
> > ======
> > You XML folks completely miss the point. JSON is important because
> > it is better supported in the browser than XML. That's why it has
> > taken hold. Arguing that angle brackets, S-expressions and JSON
> > syntax are all semantically equivalent is the height of architecture
> > astronautics[0] and COMPLETELY misses the point.
>
> Dare conveniently overlooks that it's the JSON proponents themselves
> who tout syntactic brevity as a major win for JSON.

No, Dare addresses that, saying JSON proponents who tout syntax are overshadowing what he believes to be more significant benefits of JSON. FYI, I think they are equally significant.

> > Quoting Masklinn:
> > ======
> > Wow. David, have you at no point considered refactoring your Lisp
> > and JSON markups? They're horrible, you're piling crap on crap, it's
> > idiotic.
> Yeah, mixed content is inconvenient and alien if you're a programmer
> and all the world's a data structure, in which case it's not
> surprising that you think trying to map out a mixed content model in
> data structures is crap piled on crap and looks idiotic.
>
> Of course if the data is rigidly structured, it *is* idiotic

Sorry, after re-reading that three times I still have totally no idea what point you were trying to make...

> > Quoting rektide:
> > ======
> > the DOM data structure used by XML is immensely frustrating to deal
> > with. multiple text node children hanging off elements, content
> > validation before you can .InnerText, theres just a never ending
> > rigamaroll to do what is ultimately a very very simple task, and i
> > for one do not enjoy writing such verbose kludgtastic code. the data
> > structures built by xml tooling just suck horrendous ass.
>
> So use XPath.

DOM blows chunks, and a bunch of other things. XPath still has idiosyncrasies that JavaScript objects just don't have.

> > But I think his point is that all the formats have benefits and XML
> > may have some benefits as complexities grow which I won't debate.
>
> As I said, if you have a rigid data structure, XML isn't a good fit.

Funny, I think that is the only place where it is a reasonably good fit.

> (And lord almighty did people get obsessed with replacing perfectly
> good CSV interfaces with XML.)

Frankly, I've always hated CSV; I'll take XML over CSV any day. The only reasonably reliable 'CSV' was tab-delimited, and tabs are too easy to munge in a text editor. With quotes and commas, having a comma or a quote in your data totally whacks the file. I only wish that Excel had come up with a better solution than CSV.

> If you have regular tabular data, even JSON is a lesser choice than
> CSV.

How so? If I serve CSV to AJAX I have to write a CSV parser, and there's the quotes-and-commas problem. If I trust the server, all I have to do with JSON is eval().
> I'm certainly not one to champion XML über alles. (Look back in the
> list's archives a while ago: the Megginson post I linked to was a
> reaction to a blog spat that Elliotte Rusty Harold more or less
> started by carrying an argument over to his weblog that started on
> this list, where it was mainly between me and him. In which I was in
> the role of JSON defender.)

Yeah, but it's easy to get into an argument with ERH... :-0

> > OTOH, I like to minimize complexity in web services as much as
> > possible.
>
> Things should be made as simple as possible, but no simpler.

Hence why I said "as much as *possible*." ;-)

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
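To make the browser-side argument in the exchange above concrete, here is a hedged sketch (the feed shape, field names, and URLs are hypothetical, not from the thread). Parsing JSON yields a plain object whose members are reached by direct property access, and "transforming JSON" is ordinary data manipulation. Note that JSON.parse, standard in modern browsers (and available in the 2007 era via Crockford's json2.js), does what eval() was being used for, without executing arbitrary code:

```javascript
// Hypothetical Atom-like feed serialized as JSON (shape assumed for illustration).
const payload =
  '{"title":"Example Feed","entries":[' +
  '{"title":"First","link":"http://example.com/1"},' +
  '{"title":"Second","link":"http://example.com/2"}]}';

// JSON.parse replaces the eval() idiom discussed above, without running code.
const feed = JSON.parse(payload);

// Dereferencing is plain property access: no DOM walking, no XPath.
const firstTitle = feed.entries[0].title;

// "Transforming JSON" is ordinary data manipulation, e.g. feed to HTML list.
const html =
  "<ul>" +
  feed.entries
    .map(e => '<li><a href="' + e.link + '">' + e.title + "</a></li>")
    .join("") +
  "</ul>";

console.log(firstTitle);
console.log(html);
```

The equivalent DOM route would first parse an XML document and then navigate nodes, which is the extra layer of indirection the JSON side of this thread objects to.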
* Mike Schinkel <mikeschinkel@...> [2007-12-29 16:30]:
> Aristotle Pagaltzis wrote:
> > * Mike Schinkel <mikeschinkel@...> [2007-12-29 01:35]:
> > > And I'll argue that having the browser support it strictly (and
> > > strictness is what I'm arguing against) would have resulted in too
> > > many people seeing the web as broken, which an important
> > > contingent (that mainly includes Microsoft) simply won't let
> > > happen.
> >
> > I think MSFT of all entities would be more than happy to let the web
> > appear broken. If you think the web is broken, Redmond has a
> > Silverlight engine to sell you…
>
> Heh. You weren't in the 1.5-hour meeting I had with Chris Wilson where
> I begged him to consider certain features for HTML5, where his
> repeated refrain to me was: "No can do. It will cause the web to break
> for a segment of users and if it even affects as many as 0.1% of users
> then it's a non-starter because 0.1% of users would represent such a
> large number of actual users."

A MSFT rep staunchly refused something that would bring progress to the open web? Colour me an entire rainbow of surprised. Really! I would never have thought. (Sure, they have good reasons to take that position… sure is convenient that it so happens to conflict with progress, though.)

But how is that relevant? In what possible way would supporting draconically parsed XHTML have broken existing non-XML sites?

Refusing to break existing clients' stuff is not at all incompatible with an interest in preventing the open web from evolving in a direction where it works as well as or better than any proprietary MSFT platform.

Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
Mike Schinkel wrote:
> Besides, I once tried for about six months to use XML to store
> documents and it turned out to be the biggest nightmare project I ever
> undertook, and it failed miserably. So why would I want to use an XML
> database?

Because the project would no longer be a nightmare?

> > If you insist that documents must be derived from SQL databases,
> > then yes, perhaps JSON is a better alternative.
>
> If *I* insist? Why not learn something about the open-source project
> before assigning me the responsibility of having selected the backend.
> (Even so, I would not these days ever choose an XML database...)

Many, many projects have used SQL databases for this because there were no reasonable alternatives. That is changing. Needing database services does not imply that one needs to cut everything into fields or store unstructured BLOBs. These are the compromises we have made because we had no better alternatives.

Now, however, better alternatives are appearing by the month. Three or even two years ago, the right choice for a new web system was still a SQL database. However, in many cases that's not true any more, and it's less and less true by the month as XML databases improve.

The legacy systems like WordPress and Drupal will be with us for a long time, of course. When they were invented they made the right choices for backends. But were a similar product invented today, the right choice of backend would be very different.

> > Granted, JavaScript is deployed in more browsers than XML, but I
> > wouldn't go so far as to call it a default.

I'd challenge that. Off the top of my head I can't think of a single current browser that supports JavaScript but not XML. I also know many people who turn off JavaScript (perhaps on a site-by-site basis using the Firefox NoScript extension). I don't know anybody who turns off XML. I don't think you even can, and nobody's asked to be able to do that. The X in XSS does not stand for XML.
> I don't have the expertise to confirm or deny, but the point was that
> "When comparing ... it seems clear that the XML version takes more
> code (in the browser than when using JSON) and requires a layer of
> mental indirection as the developer has to be knowledgeable about XML
> APIs and their idiosyncrasies." This is just quoting Dare Obasanjo:
>
> http://www.25hoursaday.com/weblog/PermaLink.aspx?guid=39842a17-781a-45c8-ade5-58286909226b

None of that's relevant if you're using XSLT. There are exactly three annoying cross-browser problems you have to deal with in XSLT: white space handling, the document() function, and MIME types. All three are well understood and easily dealt with. Would you even want to attempt to enumerate the cross-browser issues in JavaScript?

> > Developing and debugging AJAX applications cross-browser is 100
> > times more of a chore than XSLT apps.
>
> I strongly disagree with that assertion. I worked on a failed project
> I previously referenced that used XSLT, and as it grew it became more
> and more fragile to the point of being completely unworkable. The side
> effects that can result from an XSLT transform and their unintended
> consequences are legendary.

I call bullshit on that. That is so far from my experience I simply don't believe you.

> Using XSLT requires one to opt in to an XSLT library, and I know from
> experience the pain of experiencing the differences there are between
> those. The things you can't do in XSLT that Microsoft's version allows
> are really pathetic. I'd quote them if I hadn't blocked most of my
> XSLT knowledge from memory.

Ah, now I understand. You never actually learned XSLT. You learned the Microsoft pseudo-XSLT. I'm afraid Microsoft did some very nasty things in the early days of IE 5 and XML. They have now mostly recanted those. Microsoft was the one marching out of step with the band, not everybody else. You are, I am afraid, yet another victim of Microsoft disinformation.

> That is a non sequitur.
I've already mentioned my painful experience that
> any XSLT app which is anything larger than trivial is a maintenance
> nightmare. By its very nature XSLT is fragile and effectively
> impossible to test because of XSLT's side-effecty nature. Conversely
> you can unit test JavaScript (or any number of other similar
> languages) and ensure that a unit of code works as intended and always
> will.

XSLT is a pure functional language with no side effects. Whatever language you were working with, it certainly doesn't sound like XSLT. It sounds like someone fed you a haggis for Christmas dinner, called it a turkey, and consequently you now think we're all crazy because we say we love turkey.

> But the irony of this argument is I don't really need to make it. A
> large number of developers are using JSON/JavaScript instead of
> XML/XSLT for the very reasons I mention; it is just a whole lot easier
> to use JSON than XML in the browser. That may frustrate those enamored
> with XML/XSLT, but developers are voting with their feet and all those
> advocating XML can't stop it.

Actually no. I know not one single person (including you) who has chosen JavaScript/JSON over XML/XSLT. I do know many people who have chosen JavaScript/JSON over JavaScript/DOM, but that's a very different story.

> Listen, the reason I'm arguing strongly for JSON and against XML is
> because of the cargo-cultist mentality regarding XML that I seem to
> have unearthed here. I'm not really anti-XML (though I am anti-XSLT
> and against overdoing namespaces, especially namespaces identified by
> non-dereferenceable URIs), I'm just seeing that it is many times
> easier to work with JSON in the browser. The fact you have to defend
> XML tells me thou doth protest too much.

You're not even anti-XSLT. You just think you are. I don't believe you've ever worked with real XSLT. What you experienced was not XSLT.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/ http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Aristotle Pagaltzis wrote:
> A MSFT rep staunchly refused something that would bring progress to
> the open web? Colour me an entire rainbow of surprised. Really! I
> would never have thought. (Sure, they have good reasons to take that
> position… sure is convenient that it so happens to conflict with
> progress, though.)

As I discussed the ideas with him I came to reluctantly agree with his points. Cynicism really only reflects negatively on the cynic.

> But how is that relevant? In what possible way would supporting
> draconically parsed XHTML have broken existing non-XML sites?

Apples and oranges. We were not talking about XHTML, we were talking about other things, and frankly I don't remember the specifics as it was probably 6 months ago.

> Refusing to break existing clients' stuff is not at all incompatible
> with an interest in preventing the open web from evolving in a
> direction where it works as well as or better than any proprietary
> MSFT platform.

Like I implied, I don't immediately presume evil intentions, but as a company MSFT is a fiduciary to its shareholders, not to some intangible "greater good." Do like the man who got kicked by the jackass: consider the source and go about your business.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
Elliotte Rusty Harold wrote:
> > Besides, I once tried for about six months to use XML to store
> > documents and it turned out to be the biggest nightmare project I
> > ever undertook, and it failed miserably. So why would I want to use
> > an XML database?
>
> Because the project would no longer be a nightmare?

It was the XSLT that caused it to be a nightmare, not how it was stored. How can adding another XML layer fix the nightmare that was XSLT?

> > If *I* insist? Why not learn something about the open-source project
> > before assigning me the responsibility of having selected the
> > backend. (Even so, I would not these days ever choose an XML
> > database...)
>
> Many, many projects have used SQL databases for this because there
> were no reasonable alternatives. That is changing. Needing database
> services does not imply that one needs to cut everything into fields
> or store unstructured BLOBs. These are the compromises we have made
> because we had no better alternatives.

I used to believe that. Now I'm not so sure...

> Now, however, better alternatives are appearing by the month. Three or
> even two years ago, the right choice for a new web system was still a
> SQL database. However, in many cases that's not true any more, and
> it's less and less true by the month as XML databases improve.

So pray tell, what are some of these "better alternatives" that you speak of?

> The legacy systems like WordPress and Drupal will be with us for a
> long time of course.

Agreed. But it isn't helpful to advocate that someone using WordPress or Drupal adopt an XML database when what they need to learn is the best way to provide a RESTful interface for the existing system. Some people have real work to do.

> When they were invented they made the right choices for backends. But
> were a similar product invented today, the right choice of backend
> would be very different.

Such as?
> > > Granted, JavaScript is deployed in more browsers than XML, but I
> > > wouldn't go so far as to call it a default.
>
> I'd challenge that. Off the top of my head I can't think of a single
> current browser that supports JavaScript but not XML. I also know many
> people who turn off JavaScript (perhaps on a site-by-site basis using
> the Firefox NoScript extension). I don't know anybody who turns off
> XML. I don't think you even can, and nobody's asked to be able to do
> that. The X in XSS does not stand for XML.

Again, you can argue for XML, or you can just get work done with JSON.

> None of that's relevant if you're using XSLT. There are exactly three
> annoying cross-browser problems you have to deal with in XSLT: white
> space handling, the document() function, and MIME types. All three are
> well understood and easily dealt with.

Are you advocating writing a rich-client app in XSLT complete with all the functionality available in, say, Gmail?

> Would you even want to attempt to enumerate the cross-browser issues
> in JavaScript?

No need, I just use jQuery.

> > > Developing and debugging AJAX applications cross-browser is 100
> > > times more of a chore than XSLT apps.
> >
> > I strongly disagree with that assertion. I worked on a failed
> > project I previously referenced that used XSLT, and as it grew it
> > became more and more fragile to the point of being completely
> > unworkable. The side effects that can result from an XSLT transform
> > and their unintended consequences are legendary.
>
> I call bullshit on that. That is so far from my experience I simply
> don't believe you.

Are you saying I didn't work on a failed XSLT transform project? I can most certainly guarantee I did. Do you want me to send you the files to prove it?!?!? I was using XML for marking up documents and XSLT for transforming them for publishing for http://www.howtoselectguides.com which is now dormant because of our use of XML & XSLT.
We painted ourselves into a corner and couldn't dig our way out. We lost momentum and I just had to give up on it. One day I might revisit it, but it will be with SQL and PHP, not with XML & XSLT. That said, maybe you are better than me and can somehow overcome the design limitations of XSLT. I'm not that superhuman.

> Ah, now I understand. You never actually learned XSLT. You learned the
> Microsoft pseudo-XSLT. I'm afraid Microsoft did some very nasty things
> in the early days of IE 5 and XML.

No, it wasn't IE5 days; it was 2005. I learned XSLT 1.0, and I have about 6 XSLT books on my bookshelf to prove it, including Michael Kay's. You are certainly familiar with the "Expression must evaluate to a node-set" issue, no? Microsoft's msxsl:node-set() resolves it, but it isn't standards-based XSLT, as standards-based XSLT is brain-dead:

http://www.mikeschinkel.com/blog/gettingpastthexslterrorexpressionmustevaluatetoanodeset/

> They have now mostly recanted those. Microsoft was the one marching
> out of step with the band, not everybody else. You are, I am afraid,
> yet another victim of Microsoft disinformation.

HARDLY. I wrote vanilla XSLT 1.0 and only used MS extensions where XSLT 1.0 was woefully inadequate.

> XSLT is a pure functional language with no side effects.

Pullease. I'd write XSLT for an XML document, and then another XML document that validates to the same schema would trigger differences in the XSLT transform; it would either break or produce output that I wasn't expecting. It was a nightmare to debug those situations, and they happened over and over.

> Whatever language you were working with, it certainly doesn't sound
> like XSLT. It sounds like someone fed you a haggis for Christmas
> dinner, called it a turkey, and consequently you now think we're all
> crazy because we say we love turkey.

Your annoying use of condescension, given you really have no knowledge of my experience, only reflects negatively on you.
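For context on the node-set complaint above: in XSLT 1.0 a variable built from literal content is a "result tree fragment" and cannot be queried with XPath, which is what triggers "Expression must evaluate to a node-set". Vendor extensions such as Microsoft's msxsl:node-set() (or EXSLT's exsl:node-set()) convert the fragment into a real node-set. A minimal sketch, with hypothetical element names:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt">
  <!-- $frag is a result tree fragment in XSLT 1.0, not a node-set. -->
  <xsl:variable name="frag">
    <item>a</item>
    <item>b</item>
  </xsl:variable>
  <xsl:template match="/">
    <!-- select="$frag/item" would fail with the node-set error;
         the extension function makes the fragment addressable. -->
    <xsl:for-each select="msxsl:node-set($frag)/item">
      <xsl:value-of select="."/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

XSLT 2.0 removed the result-tree-fragment restriction entirely, which is part of why it comes up later in this thread as the suggested remedy.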
> > But the irony of this argument is I don't really need to make it. A
> > large number of developers are using JSON/JavaScript instead of
> > XML/XSLT for the very reasons I mention; it is just a whole lot
> > easier to use JSON than XML in the browser. That may frustrate those
> > enamored with XML/XSLT, but developers are voting with their feet
> > and all those advocating XML can't stop it.
>
> Actually no. I know not one single person (including you) who has
> chosen JavaScript/JSON over XML/XSLT. I do know many people who have
> chosen JavaScript/JSON over JavaScript/DOM, but that's a very
> different story.

If you don't know them, it's because you don't get out much. Google for JSON vs. XML and you'll find lots.

> > Listen, the reason I'm arguing strongly for JSON and against XML is
> > because of the cargo-cultist mentality regarding XML that I seem to
> > have unearthed here. I'm not really anti-XML (though I am anti-XSLT
> > and against overdoing namespaces, especially namespaces identified
> > by non-dereferenceable URIs), I'm just seeing that it is many times
> > easier to work with JSON in the browser. The fact you have to defend
> > XML tells me thou doth protest too much.
>
> You're not even anti-XSLT. You just think you are. I don't believe
> you've ever worked with real XSLT. What you experienced was not XSLT.

And I don't give a damn what you believe. The world doesn't revolve around what you believe, Mr. ERH. But for anyone else who cares, just ask Mike Gunderloy, who worked with me on the project, if I've ever used XSLT; he was quite annoyed by my use of it, among other things.

And here are two more XSLT-related blog posts I wrote from 2004:

http://www.mikeschinkel.com/blog/rantingaboutxsltsverbosity/
http://www.mikeschinkel.com/blog/goodxslttutorial/

And run this query: http://www.google.com/search?q=xslt+schinkel You'll find more than enough g*d d*mn evidence that YES I have used XSLT. Thank you very much.
ERH, next time do just 2 seconds of research before you accuse someone of not having experience. And take your religion elsewhere; I'm trying to get some work done. -- -Mike Schinkel http://www.mikeschinkel.com/blogs/ http://www.welldesignedurls.org http://atlanta-web.org
Mike Schinkel wrote:
> > When they were invented they made the right choices for backends.
> > But were a similar product invented today, the right choice of
> > backend would be very different.
>
> Such as?

eXist
MarkLogic
Berkeley DB XML

and there are several others out there I haven't tried yet. More seem to come out monthly.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Mike Schinkel wrote:
> HARDLY. I wrote vanilla XSLT 1.0 and only used MS extensions where
> XSLT 1.0 was woefully inadequate.

That you used MS extensions suggests you really didn't get it. That you were working with the MS XSLT in the first place suggests you had problems you have improperly diagnosed, and you're assigning the blame to the wrong cause.

> > XSLT is a pure functional language with no side effects.
>
> Pullease. I'd write XSLT for an XML document, and then another XML
> document that validates to the same schema would trigger differences
> in the XSLT transform; it would either break or produce output that I
> wasn't expecting. It was a nightmare to debug those situations and
> they happened over and over.

That's not a side effect. That's an indication of bad code. Properly written, XSLT can handle a wide variation of inputs. However, testing and debugging is as necessary as in any other software system. It's not magic.

> > Whatever language you were working with it certainly doesn't sound
> > like XSLT. It sounds like someone fed you a haggis for Christmas
> > dinner, called it a turkey, and consequently you now think we're all
> > crazy because we say we love turkey.
>
> Your annoying use of condescension given you really have no knowledge
> of my experience only reflects negatively on you.

I'm going by what you posted. But what you say continues to strongly suggest that you encountered well-known problems as a direct result of use of non-standard Microsoft technologies.

> If you don't know them it's because you don't get out much. Google for
> JSON vs. XML and you'll find lots.

Again, you're misdiagnosing. XSLT is not the issue here. Even XML isn't really the issue. It's DOM. You're seeing symptoms, but you're badly misdiagnosing the causes.
> And here are two more XSLT-related blog posts I wrote from 2004:
>
> http://www.mikeschinkel.com/blog/rantingaboutxsltsverbosity/
> http://www.mikeschinkel.com/blog/goodxslttutorial/

Hmm, the second one seems to include quite a bit of non-standard Microsoft XSLT and imperative thinking, and you thought that was a good tutorial? I can see why you had problems.

> And run this query: http://www.google.com/search?q=xslt+schinkel
> You'll find more than enough g*d d*mn evidence that YES I have used
> XSLT. Thank you very much.

I scanned a few of those. Looks to me like you had some of the classic problems of an imperative programmer trying to migrate to a functional language.

I don't doubt you had problems, but they simply didn't arise for the reasons you think they did.

--
Elliotte Rusty Harold elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
Elliotte Rusty Harold wrote:
> > HARDLY. I wrote vanilla XSLT 1.0 and only used MS extensions where
> > XSLT 1.0 was woefully inadequate.
>
> That you used MS extensions suggests you really didn't get it. That
> you were working with the MS XSLT in the first place suggests you had
> problems you have improperly diagnosed, and you're assigning the blame
> to the wrong cause.

Yeah, well, maybe I didn't. But then neither could anyone else on the XSLT list give me any alternatives, and that included Michael Kay. Of course, I'm sure you could have done better.

> > Pullease. I'd write XSLT for an XML document, and then another XML
> > document that validates to the same schema would trigger differences
> > in the XSLT transform; it would either break or produce output that
> > I wasn't expecting. It was a nightmare to debug those situations and
> > they happened over and over.
>
> That's not a side effect. That's an indication of bad code. Properly
> written, XSLT can handle a wide variation of inputs. However, testing
> and debugging is as necessary as in any other software system. It's
> not magic.

Well, I guess I was just unable to "get" the "right" way to do XSLT. And as I tend to be a lot more technically advanced than the average person, even more reason to argue against XSLT for mainstream use.

> > > Whatever language you were working with it certainly doesn't sound
> > > like XSLT. It sounds like someone fed you a haggis for Christmas
> > > dinner, called it a turkey, and consequently you now think we're
> > > all crazy because we say we love turkey.
> >
> > Your annoying use of condescension given you really have no
> > knowledge of my experience only reflects negatively on you.
>
> I'm going by what you posted. But what you say continues to strongly
> suggest that you encountered well-known problems as a direct result of
> use of non-standard Microsoft technologies.
As I've said repeatedly, I did my best to stick to the standard and only used MS extensions when I could find no other way. That said, please address how to handle this in XSLT 1.0 if MS's extensions are not the answer:

http://www.mikeschinkel.com/blog/gettingpastthexslterrorexpressionmustevaluatetoanodeset/

> > If you don't know them it's because you don't get out much. Google
> > for JSON vs. XML and you'll find lots.
>
> Again, you're misdiagnosing. XSLT is not the issue here. Even XML
> isn't really the issue. It's DOM. You're seeing symptoms, but you're
> badly misdiagnosing the causes.

Whatever.

> > And here are two more XSLT-related blog posts I wrote from 2004:
> >
> > http://www.mikeschinkel.com/blog/rantingaboutxsltsverbosity/
> > http://www.mikeschinkel.com/blog/goodxslttutorial/
>
> Hmm, the second one seems to include quite a bit of non-standard
> Microsoft XSLT and imperative thinking, and you thought that was a
> good tutorial? I can see why you had problems.

You can't write without being condescending, can you?

> > And run this query: http://www.google.com/search?q=xslt+schinkel
> > You'll find more than enough g*d d*mn evidence that YES I have used
> > XSLT. Thank you very much.
>
> I scanned a few of those. Looks to me like you had some of the classic
> problems of an imperative programmer trying to migrate to a functional
> language.
>
> I don't doubt you had problems, but they simply didn't arise for the
> reasons you think they did.

Again, whatever. I notice you didn't actually address any of those "classic problems", nor did you apologize for accusing me of never having used XSLT. Anything that is counter-intuitive to wide swaths of the population and that requires a condescending "expert" to anoint "the proper way of doing things" is not an appropriate candidate for general-purpose use on the web. Take your religion elsewhere.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
Mike Schinkel wrote:

> Well I guess I was just unable to "get" the "right" way to do XSLT.
> And as I tend to be a lot more technically advanced than the average
> person, even more reason to argue against XSLT for mainstream use.

That you were technically advanced may well have contributed to your difficulties. XSLT was designed to be easier to use for web developers and other non-traditional programmers. I've mostly taught XSLT to programmers myself, but my friends who've taught it to non-programmers as well report that the non-programmers actually "get it" faster than the programmers do. Non-programmers perhaps don't have some of the same preconceptions we programmers do. Lord knows I had to unlearn enough things when I was first learning XSLT. It really is a different sort of language.

You may well have been happier with XQuery than XSLT, though practically that wasn't an option until the last year or two. XQuery was explicitly designed to be more familiar and comfortable than XSLT to programmers experienced in traditional imperative languages like Java and C.

-- 
Elliotte Rusty Harold  elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
On Dec 30, 2007 6:29 AM, Mike Schinkel <mikeschinkel@...> wrote:

> Yeah well maybe I didn't. But neither could anyone else on the XSLT
> list give me any alternatives, and that included Michael Kay.

You *did* get lots of answers, and a lot of them pointed to your lack of understanding of what <apply-templates /> actually does and how the declarative nature of XSLT in fact favours it, or the automatic templates and why they're there. You couldn't understand why <value-of /> didn't do what <apply-templates /> does, and this was pointed out. I also seem to recall that Michael Kay indeed gave you advice and pointed to the same thing that Elliotte Rusty Harold points to now: if you want imperative programming, XSLT is not for you. Michael also pointed out that perhaps XSLT 2.0 might be a better option for you, because it can nanny you better.

> Well I guess I was just unable to "get" the "right" way to do XSLT.
> And as I tend to be a lot more technically advanced than the average
> person, even more reason to argue against XSLT for mainstream use.

With XSLT comes a paradigm shift for most people who are expecting imperative programming. Sure, you *can* do it that way, and (unfortunately?) XSLT is flexible enough to allow you to do so, but the universal principle of GIGO applies here just as anywhere else. Just like people struggle with RDF and Topic Maps for the first while to "get it" (or any ontology work, really), XSLT too has a "get it" threshold (functional, declarative, node-tree based). Some never get it. Most people who complain about XSLT don't get what XSLT is or how it works best. I used to be one myself, not "getting it" and complaining. But then one day I got it (and really, it was a direct result of getting XML on a deeper level), and now I can't go back to dealing with any markup the DOM/SAX/API way; they're all so inelegant to me.

It's ok not to "get it." There certainly isn't an automatic membership to a cool club if you do.
In fact, it does the opposite; it makes you a member of that club of people who sometimes have to tell people who don't get it that they, indeed, don't get it. :) And that that's ok.

So should we not promote XSLT for "normal programmers"? Sometimes this is true, sometimes it is not. We need to choose the right tool for the right job, and sometimes choosing a tool that's right for the programmer is also right. And then, learning something new ain't bad either. It all depends.

> As I've said repeatedly, I did my best to stick to the standard and
> only use MS extensions when I could find no other ways.

It's not always about the standards themselves; more often it's about grokking a certain concept.

> That said, please address how to handle this in XSLT 1.0 if MS'
> extensions are not the answer:
>
> http://www.mikeschinkel.com/blog/gettingpastthexslterrorexpressionmustevaluatetoanodeset/

Handle what? There's nothing here to handle except your assumption that doing so is a good thing, and there are a *lot* of design issues in saying that. XSLT 1.0 lets you create a variable holding either a node-set (through @select) or a result tree fragment (through element content), and this was done for very good reasons. If you can't accept the ground rules from which XSLT 1.0 came, use XSLT 2.0, which has some slightly different design concepts.

Alex
-- 
---------------------------------------------------------------------------
 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
------------------------------------------ http://shelter.nu/blog/ --------
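[Editor's sketch] To make the node-set vs. result-tree-fragment distinction concrete, here is a minimal XSLT 1.0 fragment; the element names and attribute values are invented for illustration, not taken from anyone's project:

```xml
<!-- Defined via @select, $items is a true node-set in XSLT 1.0, -->
<!-- so node-set operations such as predicates work on it: -->
<xsl:variable name="items" select="/catalog/item"/>
<xsl:apply-templates select="$items[@price &gt; 10]"/>

<!-- Defined via element content, $frag is a result tree fragment: -->
<xsl:variable name="frag">
  <item price="5"/>
  <item price="20"/>
</xsl:variable>

<!-- In pure XSLT 1.0 the following is illegal and yields the familiar -->
<!-- "expression must evaluate to a node-set" error; it requires an    -->
<!-- extension function such as exsl:node-set() or msxsl:node-set():   -->
<!-- <xsl:apply-templates select="$frag/item"/> -->
```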
* Elliotte Rusty Harold <elharo@...> [2007-12-29 19:40]:
> Mike Schinkel wrote:
> > I wrote vanilla XSLT 1.0 and only used MS extensions where
> > XSLT 1.0 was woefully inadequate.
>
> That you used MS extensions suggests you really didn't get it. That
> you were working with the MS XSLT in the first place suggests you
> had problems you have improperly diagnosed, and you're assigning
> the blame to the wrong cause.

You're being rather too harsh on Mike there. I *like* XSLT, but the Result Tree Fragment concept is utterly braindead. It's naught but premature optimisation by committee; the WG thought that requiring implementors to return full node-sets from templates would be an undue burden. The upshot is that nearly every single XSLT 1.0 processor has an extension function to convert RTFs to node-sets, such as `msxsl:node-set`. Looks like the committee guessed wrong. Indeed, XSLT 2.0 does away with the RTF brain damage.

(I almost never write XSLT without reaching for EXSLT. Some things are impossible without `exsl:node-set`, and trying to mung text content with bare XSLT is an exercise in pain and suffering. Also, processing Atom without EXSLT's datetime functions is lots of unfun.)

However, I can't exactly second Mike's experience (as you'd guess from the fact that I like XSLT). Admittedly, I haven't had particularly large XSLT codebases to maintain. The largest one is a set of transforms based on a library of templates shared between them, running up to about 2,000 lines or so all told. As I said, not huge, but still substantial. I have found myself amazed time and again at how quickly I could get back into the code, sometimes almost a year after the last time I touched it, and add a moderately complex feature after just minutes of reorientation. I feel confident that I could maintain an XSLT codebase 5× the size without any trouble.

Maybe I am exceptional, as Mike would apparently argue; I dunno. I certainly don't feel like a genius most of the time, though.
Regards, -- Aristotle Pagaltzis // <http://plasmasturm.org/>
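[Editor's sketch] For readers following along, here is a minimal illustration of the workaround Aristotle describes, using the EXSLT `exsl:node-set()` extension function (with MSXML the equivalent is `msxsl:node-set()` in the `urn:schemas-microsoft-com:xslt` namespace). The variable contents are invented for illustration:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:exsl="http://exslt.org/common"
    exclude-result-prefixes="exsl">

  <!-- A variable built from literal content is a result tree fragment -->
  <xsl:variable name="rtf">
    <item>a</item>
    <item>b</item>
  </xsl:variable>

  <xsl:template match="/">
    <!-- exsl:node-set() turns the RTF into a real node-set, so it can
         be iterated and filtered like any other nodes -->
    <xsl:for-each select="exsl:node-set($rtf)/item">
      <xsl:value-of select="."/>
    </xsl:for-each>
  </xsl:template>

</xsl:stylesheet>
```

Support for `exsl:node-set()` varies by processor; libxslt and Saxon 6.5 implement it, while MSXML needs the `msxsl` variant.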
Can we take a 48-hour breather or something here, please? Because it's really looking to me like we're into Ford vs. Chevy territory again. And we've gone all REST-irrelevant, as far as I can tell.
> Berkeley dbXML

I have bad memories of this tool. Too raw, and I don't like the fact that it's an embedded DB. We (the company I work for and I) decided to go with PostgreSQL 8.3, which supports an XML data type in columns, instead. After using Oracle's BDBXML I'm not really enthusiastic about this kind of DB. Are the others better?

BTW, I'd like to give CouchDB a try for my experiments one of these days. Damien Katz decided to drop XML altogether: <http://damienkatz.net/2007/09/system_overload_1.html>

It seems he doesn't really like XML at all. But since that DB is document-oriented and doesn't care what you put inside, you can still use XML as the document format.

-- 
Lawrence, stacktrace.it - oluyede.org - neropercaso.it
"It is difficult to get a man to understand something when his salary
depends on not understanding it" - Upton Sinclair
* Lawrence Oluyede <l.oluyede@...> [2007-12-30 01:00]:
> Are the others better?

I have no experience with it yet, but I'm itching to take MonetDB for a spin, based purely on what I've read about it.

Regards,
-- 
Aristotle Pagaltzis // <http://plasmasturm.org/>
Mike Schinkel wrote:

> Yes, but when did I say we are dealing with nails?

You said XSLT was a hammer; I'm suggesting that your application is a nail.

> Pigeonholing it as a system for managing HTML documents is doing it
> and its community a huge disservice.

It's a Content Management System, and like all CMSs it generates output as (X)HTML documents, does it not? I fail to see how pointing out the obvious is a disservice to anyone.

> > So, what is the point of using a SQL database for this task, as
> > opposed to, say, an XML database which *is* designed to handle
> > marked-up documents?
>
> Well, uh, because Drupal uses MySQL to store its content?

As my example URL shows, any legacy SQL-based CMS may be wrapped by an XML layer. Just because Drupal uses MySQL doesn't mean you have to generate output by making SQL queries your starting point. Relational databases have their place, but they are definitely not efficient at serving Web content.

> If *I* insist? Why not learn something about the open-source project
> before assigning me the responsibility of having selected the
> backend. (Even so, I would not these days ever choose an XML
> database...)

Yes, "if you insist," which was an assumption on my part about your POV. An accurate one at that, so I fail to see where you take offense, particularly to the extent where you derail a cordial conversation by introducing insults and ad-hominem shots at myself and others.

I am indeed familiar with Drupal. And Typo3. And Joomla. And WordPress. And etc. etc. etc. Plus, I wrote a CMS from scratch in 1998 using Server-Side JS. So why not keep an open mind when it comes to learning from the experience of others, instead of assuming they have none?
> In that case, why not go over to http://drupal.org and start a
> discussion thread there telling them how everyone in the community
> has been getting it wrong all this time and that everyone using
> Drupal should immediately stop what they are doing and port to an
> XML database?
>
> Sorry, your assertion was just so over-the-top I couldn't resist the
> sarcasm.

My assertion is that mature CMS applications are *legacy* applications, particularly from a REST standpoint -- but I certainly won't dive headlong into the viper pit of any CMS's online community of the faithful by suggesting these apps be redesigned following the tenets of REST, any more than I would go to the Typo3 forum and claim that XSLT is a better solution than its proprietary templating system. Just because I'd be shouted down, with plenty of ad-hominem attacks thrown in for good measure, wouldn't make me wrong, would it?

> True, XSLT is a standard, but just because it made its way to a
> standard doesn't mean it was a good idea. Group-think is, has always
> been, and will always be alive in standards processes.

This argument is a non sequitur, as it applies to all standards, including things like how the prongs are arranged on power outlets. But it isn't a valid reason to adopt your own power outlet design that only works with your devices, even if it does provide standard household current.

> Using XSLT requires one to opt in to an XSLT library, and I know
> from experience the pain of the differences there are between those.

How so? XSLT is built into the browsers it's served to, as is JavaScript. The difference is, to make JavaScript perform the same transformations XSLT is capable of requires some sort of code library specific to your app. XSLT does not; its only purpose is to transform XML.

> Listen, the reason I'm arguing strongly for JSON and against XML is
> because of the cargo-cultist mentality regarding XML that I seem to
> have unearthed here.
> I'm not really anti-XML (though I am anti-XSLT, and against overdoing
> namespaces, especially namespaces identified by non-dereferenceable
> URIs); I'm just seeing that it is many times easier to work with JSON
> in the browser. The fact you have to defend XML tells me thou doth
> protest too much.

Aha, now I understand why you're being rude. You lack any shred of respect for those who make different technology decisions than you do, as there is obviously something deficient about our collective intelligence. Has it ever occurred to you that your POV could be dismissed in a similar fashion? Like, "The fact that you have to defend JSON tells me...blah blah blather." It's a strawman.

Is it really necessary to call me a "cargo cultist" because XML technology allows me to accomplish my work faster and easier than a pure JavaScript/JSON approach would? You aren't familiar with my project, so who are you to second-guess my decisions in such a dismissive fashion? This behavior does not support your position. Quite the contrary. :-(

> > With this JSON/AJAX approach, the fallback for browsers with no
> > JavaScript would be what, PHP?
>
> Sure. But it could also be done in Python, Ruby, Perl, Lisp, Java,
> C#, VB.NET, VBScript...
>
> As you say, it only needs to be coded once (per language.)

No, I said it only needs to be coded *once* in *one language*. Are any browsers compatible with Python, Ruby, Perl, etc. that you mention? No. But XSLT works client-side and server-side, so you can indeed use the same code on browsers that is used on the server.

> > So that's PHP and JavaScript being used to accomplish the
> > transformation task
>
> And the problem is?

The problem is, if you're accomplishing the same task by writing code in one language for the server and another language for the client, then you are going against the principle of generality by not re-using your code. XSLT allows your server-side code to be re-used on the client.
> How is it twice the overhead?

Is that not obvious? If you have code that accomplishes a specific task, but must write it a second time in a second language, then you have twice the maintenance over the long haul as you would by having only one piece of code, in one language, accomplishing the same task.

> Besides, grokking XSLT takes an order of magnitude more mental
> effort for most people.

Yes, if they're stuck on imperative programming. For example, the quality of XSLT code on my project is very poor at the moment, because I employed two coders for six months who just couldn't grasp functional programming. So I fired them and took on a new partner to re-code the bulk of the project, but he had to learn XSLT first. This only took him one week, as he's solidly grounded in functional programming.

> How is it more likely to be understood? JSON has fewer non-data
> syntax characters than XML that would otherwise confuse the
> uninitiated.

Well, for one thing, a browser is likely to recognize Atom and ask if the user wants to subscribe to the feed. This just won't happen presenting the same data as JSON.

If a JSON document pops up that presents information in a proprietary fashion, I, the user, won't have the slightest clue how to decipher it without looking at the JavaScript code. OTOH, if an Atom document pops up that presents information in a standardized fashion, I, the user, have a pretty good chance of understanding who posted what when, without needing to go any further, due to the standard MIME type being used.

Your argument may hold true for some use cases, but my point remains that the use case of managing Web content is best done with Atom, because that's what it is designed for.
It's the difference between a proprietary interface using random JSON code, and a standardized interface using a known Content-Type -- Atom -- as opposed to random XML code, which would present the same problem I have with using JSON for this purpose, even if JSON becomes standardized.

> My guess is someone will eventually create an Atom-to-JSON
> converter, and then someone else will create an Atom equivalent in
> JSON, and who knows where that might take us.

And any application built this way would constitute non-RESTful use of library APIs, unless some sort of "Atom-as-JSON" MIME type is created.

> NOW I see why you are arguing so strenuously for XML & XSLT. If you
> accepted that I was right about JSON you'd have to accept that all
> your efforts on your XML+XSLT project were for naught. Knowing human
> nature's need to justify its decisions, I really probably should not
> be arguing this with you, because even if I'm right your efforts on
> XSLT would make it hard for you to admit it.

Oh good grief, again with the insults. How tedious, especially coming from someone who complains about being condescended to...

My choice of technology is based on my knowledge and experience, plus a tremendous amount of research. If I truly believed that your approach was a superior solution, then that's what I would have used. But I don't believe that, nor do I need to "justify my decisions" by arguing for an approach you claim I "know" to be deficient. How condescending a response is *that*?!?

> Frankly, I'd be horrified if that were ever to take off on a broad
> scale. Fortunately, I'm pretty sure it won't; at least not the XSLT
> part; XSLT is just too hard to learn for the average Joe.

Is it really necessary to denigrate the work of others when you participate in a discussion thread? Your statement is merely an ad-hominem insult, as you fail to back it up by explaining *why* you would be "horrified" if my project succeeds.
What "average Joe" needs to understand XSLT to use WordPress under my system? None. On to your next insult...

> Honestly speaking, aside from being a magnificent example of doing
> it because it can be done, what tangible benefits does this really
> provide that were not already available in vanilla WordPress?

Seriously, WTF? Why can't you phrase this "What tangible benefits..." instead of starting off by insulting my work?

In answer to your question: "REST". Are you claiming that vanilla WordPress is a RESTful application? If you'd care to review Roy's thesis again, please note how it talks about using a RESTful approach to legacy applications by encapsulating them in a layered system. I've created a REST wrapper layer for legacy, SQL-bound CMS applications, not just WordPress, which has the added benefit of serving as an integration point between CMS applications.

It's interesting how you can be preemptively dismissive of someone's work at the drop of a hat like that, without even waiting for an answer to your question. Is REST a tangible benefit for non-RESTful systems? Yes, I believe it is, but that comes through my knowledge and experience, not received wisdom, which you repeatedly suggest is what drives me. It is not a "religion," it is a *solution.* Just like XSLT.

> Are you really seriously suggesting building a full working web app
> with XSLT and forsaking all JavaScript? Aside from not seeing how it
> will work, the thought of having to code again in XSLT gives me the
> cold chills...

Did you really seriously not even bother to "view source" on my examples? You might notice an interesting use of AJAX to enhance the cacheability of system output, as I am using both AJAX and XSLT. My point remains that AJAX is very clumsy for doing transformations compared to XSLT; my point was never "don't use JavaScript, use XSLT instead". You are putting (or attempting to put) words in my mouth. In, once again, an insulting fashion.
Try it again, without the "really seriously" and "cold chills" B.S., OK?

> But hey, if you are wedded to XML+XSLT and you have invested tons of
> time into a project using them, then more power to you; don't let my
> distaste for the tools you used get you down. Seriously; I'm just
> one person trying to get something done. You don't have to convert
> me to your religion just as I don't need to convert you to mine.

Trust me, being insulted by others who have not established that they know what the hell they're talking about is not the sort of thing that gets me down. I am merely presenting a working example of a RESTful system using client-side XSLT to offload some of the heavy lifting from server to client, transferred as Atom.

Given your attitude, I guess I really don't give a damn what you think, either. So I'll be quite satisfied if people can read my posts, examine my example, and learn from it, even if *your* mind is closed to the possibility that anyone who advocates XML might not be a moron.

None of my work comes from "religious conviction" about any technology, only what works best for me after careful consideration of ALL the alternatives. Your anti-XML position comes across as FUD, and FUD is not the product of rational consideration of the alternatives -- as evidenced by your favoring of ad-hominem attacks rather than a rational explanation of your position.

Good day.

-Eric
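[Editor's sketch] As an aside for readers unfamiliar with the client-side half of what Eric describes: an XML document such as an Atom feed can reference its transform with an `xml-stylesheet` processing instruction, so XSLT-capable browsers apply the stylesheet themselves, while the same stylesheet can be run server-side for other clients. A minimal sketch, in which the `href` path, IDs, and dates are all made up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/transforms/render.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <id>urn:example:feed-1</id>
  <updated>2007-12-30T00:00:00Z</updated>
  <entry>
    <title>Hello</title>
    <id>urn:example:entry-1</id>
    <updated>2007-12-30T00:00:00Z</updated>
  </entry>
</feed>
```

Note that some browsers may show a built-in feed preview for recognized Atom content instead of applying the stylesheet, so behavior varies by client.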
> > Are the others better?

That's a tough question to answer. Not all XML DBs are created equal. We are using BDBXML where an embedded DB is called for, and eXist elsewhere. But these two applications have completely different approaches to the same concept -- storing XML natively. If we were to try using BDBXML where we are now using eXist, and vice versa, we'd be in a world of sh*t. However, we could outright replace BDBXML with MarkLogic if we had to, as they take the same approach. There is no "best XML DB," only the XML DB that's best for your situation. ;-)

-Eric
Karen wrote:

> Because it's really looking to me like we're into Ford vs. Chevy
> territory again. And we've gone all REST-irrelevant, as far as I can
> tell.

Yes, thanks for stepping in, and sorry. I came here to discuss RESTful design issues and got accosted by the XML+XSLTists preaching complete salvation, but only for those who truly believe. But it was my fault; I took the bait. Honestly, I really just want to discuss the REST-related issues and leave the unrelated religions at home. I'll try to wrap it up now below.

Aristotle Pagaltzis wrote:

> You're being rather too harsh on Mike there. I *like* XSLT, but the
> Result Tree Fragment concept is utterly braindead. It's naught but
> premature optimisation by committee; the WG thought that requiring
> implementors to return full node sets from templates would be an
> undue burden. The upshot is that nearly every single XSLT 1.0
> processor has an extension function to convert RTFs to node sets,
> such as `msxsl:node-set`. Looks like the committee guessed wrong.
> Indeed, XSLT 2.0 does away with the RTF brain damage.

Thank you for agreeing that XSLT's Result Tree Fragment concept is utterly braindead. I really do appreciate your coming to my defense here. It is quite annoying to have people tell me that I don't have relevant experience when they have no idea, and then, when I prove my experience, have them change tactics and attempt to discredit my experience by saying I "just don't get it" rather than propose tangible alternatives to the very problems that resulted in my criticism. So thank you.

> (I almost never write XSLT without reaching for EXSLT. Some things
> are impossible without `exsl:node-set`, and trying to mung text
> content with bare XSLT is an exercise in pain and suffering. Also,
> processing Atom without EXSLT's datetime functions is lots of
> unfun.)

I discovered EXSLT toward the end of my XSLT ordeal, but it was unfortunately too little, too late at the time.
If I were forced to use XSLT again, I'd certainly start there.

Alexander Johannesen wrote:

> You *did* get lots of answers, and a lot of them pointed to your
> lack of understanding of what <apply-templates />...

Then we'll just have to say I don't "get it." But I'm quite content with that now, as long as I don't have to deal with the XML+XSLT evangelism, especially on a list devoted to another topic: REST. But I know that my simply requesting it won't stop the aggressive proselytizing of the true believers.

So tell you what, Alex, Rusty, Aristotle, Eric, et al.: rather than dishing out admonitions about my apparent lack of cognition and presuming I would be a true believer too if I only "got it", and rather than not discussing actual solutions to the problems I previously posed on XSLT-related lists, why not take a crack at my prior use case and prove that I really didn't get it after all and that XML+XSLT truly is the savior in our midst?

The goal of the challenge will be to take the XML source for [1], [2], [3], [4], [5], and [6] and create an XSL transform that reproduces what was published with full fidelity, and does so for any XML document matching the same input schema. And yes, the HTML for those documents was generated with an XSL transform, so it is possible (note the transform only produced the inner content, not any of the ads for menus or whatever.)

If anyone accepts this challenge, I'll go back and make sure I have everything needed to make it work (XML source, XSD schema, XSLT app, and anything else needed), package it up, and post it on my blog for all to download. (I'd do it proactively, but it will take some time, so why make the effort if nobody takes the challenge?) Then we can all have a 'naked conversation' about it, take it OFF [rest-discuss], and you can prove the 'obvious' benefits of XSLT, as well as your XSLT prowess and my obvious lack thereof. And who knows?
Maybe you'll convert me and I'll become an XML+XSLT true believer too. Stranger things have happened. But I won't hold my breath.

BTW, don't think I'm trying to get you to do my work for me; hardly. The latest timestamp on the project files is Feb 8th, 2006, the code is not currently in use, I had/have no current plans to resurrect it, and I wouldn't use XSLT again if and when I do.

So, LET US TAKE THIS XML+XSLT DEBATE OFF-LIST AND TO OUR BLOGS, if need be, and clear the virtual airwaves so we can resume mutually agreeable discussions of RESTful solutions and RESTian best practices. If you want to take up the challenge, you can email me directly at mikeschinkel@... and I'll go about preparing the files and blog post.

-- 
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org

[1] http://www.howtoselectguides.com/dotnet/ormapping/
[2] http://www.howtoselectguides.com/dotnet/ormapping/tables/
[3] http://www.howtoselectguides.com/dotnet/pdf/
[4] http://www.howtoselectguides.com/dotnet/pdf/tables/
[5] http://www.howtoselectguides.com/dotnet/charting/
[6] http://www.howtoselectguides.com/dotnet/charting/tables
Lawrence Oluyede wrote:

> BTW I'd like to give CouchDB a try for my experiments one of these
> days. Damien Katz decided to drop XML altogether
> <http://damienkatz.net/2007/09/system_overload_1.html>

Yes, CouchDB looks very cool; someone pointed it out to me the other day, and it appears very RESTful; bravo! Too bad it uses Erlang, and I only say that because of the difficulty of it reaching wide-scale deployment at hosting companies in any near-term horizon.

> It seems he doesn't really like XML at all. But since that DB is
> document oriented and doesn't care what you put inside you can still
> use XML as document format

heh! +1 for what he said! :-)

-- 
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org
On 12/29/07, Mike Schinkel <mikeschinkel@...> wrote:

> Yes, thanks for stepping in, and sorry. I came here to discuss
> RESTful design issues and got accosted by the XML+XSLTists preaching
> complete salvation, but only for those who truly believe. But it was
> my fault; I took the bait. Honestly, I really just want to discuss
> the REST-related issues and leave the unrelated religions at home.
> I'll try to wrap it up now below.

That was... wow. That was *exactly* the opposite of what I was hoping for.
> That's a tough question to answer. Not all XML DBs are created
> equal. We are using BDBXML, where an embedded DB is called for, and
> eXist elsewhere. But these two applications have completely
> different approaches to the same concept -- storing XML natively. If
> we were to try using BDBXML where we are now using eXist, and vice
> versa, we'd be in a world of sh*t. However, we could outright
> replace BDBXML with MarkLogic if we had to, as they take the same
> approach. There is no "best XML DB," only the XML DB that's best for
> your situation. ;-)

This seems like a good pragmatic response :-)

By the way, I think we were just using BDBXML where an embedded DB was not the right way to go. I'll look into eXist.

-- 
Lawrence, stacktrace.it - oluyede.org - neropercaso.it
"It is difficult to get a man to understand something when his salary
depends on not understanding it" - Upton Sinclair
On Dec 30, 2007 2:12 PM, Karen <karen.cravens@...> wrote:

> That was *exactly* the opposite of what I was hoping for.

What were you hoping for?

Alex
-- 
---------------------------------------------------------------------------
 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
------------------------------------------ http://shelter.nu/blog/ --------
Lawrence Oluyede wrote:

> BTW I'd like to give CouchDB a try for my experiments one of these
> days. Damien Katz decided to drop XML altogether
> <http://damienkatz.net/2007/09/system_overload_1.html>
>
> It seems he doesn't really like XML at all. But since that DB is
> document oriented and doesn't care what you put inside you can
> still use XML as document format

Very interesting. However, I'm surprised at his opposition to XML. In what seems to be the CouchDB use case, the primary purpose is to store web pages in a database. This is exactly what XML is for, and exactly what a native XML database and XQuery are designed to do. That he had some problems using XML as an API substitute does not surprise me, but why has he then thrown the web-page baby out with the RPC bathwater? Curious.

The real question, I suppose, is whether CouchDB supports any subdocument level of granularity. Can you actually search inside documents, or can you merely insert and retrieve them by key? What if I want to cut documents apart and put them back together again? Is this possible? If not, we're simply back in the MySQL world where all pages are unstructured BLOBs. I spent some time looking through the FAQ and documentation but I couldn't figure out how to do this.

Or perhaps I'm just totally misunderstanding the use case? However, without understanding the structure of the documents CouchDB contains, it doesn't strike me as a modern tool for web site development compared to options like eXist and MarkLogic.

-- 
Elliotte Rusty Harold  elharo@...
Java I/O 2nd Edition Just Published!
http://www.cafeaulait.org/books/javaio2/
http://www.amazon.com/exec/obidos/ISBN=0596527500/ref=nosim/cafeaulaitA/
[ Attachment content not displayed ]
> Very interesting. However I'm surprised at his opposition to XML. In
> what seems to be the CouchDB use case the primary purpose is to
> store web pages in a database.

I'd say documents, every kind of document.

> This is exactly what XML is for, and exactly what a native XML
> database and XQuery are designed to do. That he had some problems
> using XML as an API substitute does not surprise me, but why has he
> then thrown the web page baby out with the RPC bathwater? Curious.

He seems quite skilled and competent, so behind the harsh post there must be something that bothered him. From my little experience with Erlang, and taking a wild guess, I'd say JSON is better handled in Erlang than XML, and since he embeds a JavaScript engine the difficulties handling JSON are basically non-existent. BTW, he supported XML as an exchange format until 0.7, so there has to be a technical reason (maybe Erlang's support for XML?) to justify the change. I should dive into the mailing list and find out, but I don't have time right now.

> The real question, I suppose, is whether CouchDB supports any
> subdocument level of granularity. Can you actually search inside
> documents, or can you merely insert and retrieve them by key? What
> if I want to cut documents apart and put them back together again?
> Is this possible? If not, we're simply back in the MySQL world where
> all pages are unstructured BLOBs. I spent some time looking through
> the FAQ and documentation but I couldn't figure out how to do this.

CouchDB is declared "alpha software," so there's plenty of work ahead. I don't have answers to your questions because I have zero experience with this tool. I'm reading (at this exact moment) some introductory material online.

> Or perhaps I'm just totally misunderstanding the use case? However,
> without understanding the structure of the documents CouchDB
> contains, it doesn't strike me as a modern tool for web site
> development compared to options like eXist and MarkLogic.
CouchDB gets for free the features that make Erlang shine, so I wouldn't dismiss it so easily. Anyway, we're just speculating, and we are deeply off topic, as Karen was trying to say in the other thread ;-)

--
Lawrence, stacktrace.it - oluyede.org - neropercaso.it
"It is difficult to get a man to understand something when his salary depends on not understanding it" - Upton Sinclair
On 12/30/07, Alexander Johannesen <alexander.johannesen@...> wrote:
> What were you hoping for?

Good question, and I kind of left that open. Not "You're right... but let me get my licks in first!" though (or at least not from that source). So I guess *I'll* take a breather, though I imagine it'll be more than 48 hours.
Elliotte Rusty Harold wrote:
> Lawrence Oluyede wrote:
> Very interesting. However I'm surprised at his opposition to XML. In
> what seems to be the CouchDB use case the primary purpose is to store
> web pages in a database. This is exactly what XML is for, and exactly
> what a native XML database and XQuery are designed to do. That he had
> some problems using XML as an API substitute does not surprise me, but
> why has he then thrown the web page baby out with the RPC bathwater?
> Curious.
I'm not familiar with the reasoning behind Damien's decision; however,
since documents on the CouchDB server are heavily accessed using views
(see more below) written in JavaScript, this might be part of it.
Also, CouchDB will use field-level replication between distributed
database servers, which, again, I imagine might be easier with JSON documents.
>
> The real question, I suppose, is whether CouchDB supports any
> subdocument level of granularity. Can you actually search inside
> documents, or can you merely insert and retrieve them by key? What if I
> want to cut documents apart and put them back together again? Is this
> possible? If not, we're simply back in the MySQL world where all pages
> are unstructured BLOBs. I spent some time looking through the FAQ and
> documentation but I couldn't figure out how to do this.
Yes, this is one of the major features of CouchDB. It uses what are
called views (think of a very flexible RDBMS view plus an index), which
are computed using a JavaScript map/reduce algorithm. During a view
update, each document is passed to a JavaScript function, e.g.:
function(doc) {
  map(doc.articleId, {title: doc.title, content: doc.content});
}
This function will create a view indexed on articleId with documents
containing only the title and content of articles.
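The index a map view builds can be sketched in plain Python. This is not the CouchDB API, just the shape of the computation Niklas describes; the document fields (articleId, title, content) mirror the JavaScript example above, and the list-based "database" is purely illustrative:

```python
# Sketch of a CouchDB-style map view: run a map function over every
# document and collect (key, value) pairs into a key-sorted index.

def map_fn(doc, emit):
    # Equivalent of the JavaScript view function above: index on
    # articleId, keeping only title and content in the view rows.
    emit(doc["articleId"], {"title": doc["title"], "content": doc["content"]})

def build_view(docs, map_fn):
    rows = []
    for doc in docs:
        map_fn(doc, lambda key, value: rows.append((key, value)))
    return sorted(rows)  # views are kept sorted by key for lookups

docs = [
    {"articleId": 2, "title": "B", "content": "bbb", "author": "x"},
    {"articleId": 1, "title": "A", "content": "aaa", "author": "y"},
]
view = build_view(docs, map_fn)
# Each row holds only the fields the map function emitted, so the
# "author" field never reaches the view.
```

The point is that the view is a derived, subdocument-level index: you query the rows the map function emitted, not the raw documents.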
/niklas
-------------------------
http://protocol7.com
Mike,

Now you confused me. It sounds like you do want content negotiation?!?

Here is how I understand it from my reading of the HTTP spec:

For sending data (i.e. requests), it's common to support multiple content-types for the PUT/POST body (GET never has a body; DELETE rarely has a body). The client/user-agent simply sends its request with whatever content-type, and the server lets the client know if it was able to digest it. If it wasn't able to digest it, the server responds with 400.

For receiving data (i.e. responses), it's possible to support multiple response types via 2 mechanisms:

1) server-driven negotiation: the client/user-agent sends information to the server about its preference in representations (i.e. Accept* headers); the server either responds with one of the acceptable representations or with 406.

2) agent-driven negotiation: the server responds with 300 and includes a list of candidate resources for the user-agent to pick from; note there is no pre-determined content-type/format for doing so; it could be a newline-separated list of URIs or it could be an HTML page with a bulleted list of hyperlinks; somehow the client/user-agent picks the desired URI and re-issues the request to obtain the desired representation.

#2 is an interesting beast when doing a PUT/POST (and this might well be the question you were asking). For the choice of possible URIs to make sense, they must correlate to the PUT/POST. So were the resources corresponding to these URIs created by the PUT/POST for the sole purpose of holding the response representation? If so, how long are they available? Clearly, this builds up state on the server that will make it less scalable. Alternatively, by responding with 300 and providing a choice of URIs, the server is instructing the client/user-agent to resubmit the request to one of them. That can be unfortunate when the request body was large. There doesn't appear to be a recommended way to discover the proper URI without doing the actual request.

- Steve

--------------
Steve G. Bjorg
http://wiki.mindtouch.com
http://wiki.opengarden.org

On Dec 14, 2007, at 7:22 AM, mike amundsen wrote:

> Yes, my primary question is about how common it is to support multiple
> content-types for POST/PUT (I see lots of clear examples of this for GET).
>
> I can see how this can be done using Content-Type in the POST/PUT and
> having the server behave accordingly (return 415 if Content-Type is not
> supported for this method).
>
> My secondary question is, as Steve clarified, about how this is commonly
> communicated. Fielding refers to "reactive (agent-driven) negotiation" in
> section 6.3.2.7 of his dissertation:
>
> "...when a user agent requests a negotiated resource, the server responds
> with a list of the available representations. The user agent can then
> choose which one is best according to its own capabilities and purpose."
> (pg. 126)
>
> Sounds cool, but it gets a bit fuzzy after that...
>
> "The information about the available representation may be supplied via a
> separate representation (e.g. a 300 response), inside the response data
> (e.g. conditional HTML), or as a supplement to the "most likely"
> response." (pg. 126)
>
> Since the dissertation is from 2000, and there has been some progress
> regarding WSDL/WADL, SMEX-D, etc., I am asking for feedback on how folks
> are currently handling the 'discovery' part of this puzzle.
>
> Mike A
>
> On 12/14/07, Steve Bjorg <steveb@...> wrote:
>> Julian,
>>
>> I believe Mike was asking about supporting multiple content-types
>> for a request, not negotiating the response content-type.
>>
>> - Steve
>>
>> --------------
>> Steve G. Bjorg
>> http://wiki.mindtouch.com
>> http://wiki.opengarden.org
>>
>> On Dec 14, 2007, at 2:46 AM, Julian Reschke wrote:
>>
>>> mike amundsen wrote:
>>>>
>>>> I am working on the details of supporting multiple media types for a
>>>> resource. To this point, I have concentrated on supporting the Accept
>>>> header as a way to allow clients to inform the server on what media
>>>> type to use for the representation on a GET request. This all seems
>>>> fine.
>>>>
>>>> Now I am wondering how important (or common) it is to provide multiple
>>>> media type support for POST and PUT. I would assume Content-Type would
>>>> be used by the client to communicate this info. I would also assume
>>>> that servers could respond with Status 415 if the Content-Type was not
>>>> supported for the POST or PUT.
>>>>
>>>> Any guidance or pointers to references on this topic are appreciated.
>>>>
>>>> Mike A
>>>
>>> I would strongly discourage using content negotiation for authoring.
>>>
>>> Let the server return a Content-Location upon GET/HEAD, and use that
>>> URI for modifying the resource.
>>>
>>> BR, Julian
>
> --
> mca
> "In a time of universal deceit, telling the truth becomes a
> revolutionary act." (George Orwell)
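Steve's mechanism #1 (server-driven negotiation) can be sketched in a few lines of Python. This is a simplified, framework-free illustration of the Accept-header logic he describes, not a full RFC 2616 implementation (it ignores `Accept` parameters other than `q` and media-type specificity tie-breaking):

```python
# Minimal sketch of server-driven content negotiation: parse the
# Accept header's media ranges and q-values, then pick the best
# representation the server supports, or signal that none is
# acceptable (the server should then respond 406).

def negotiate(accept_header, supported):
    """Return the best supported media type for an Accept header,
    or None when the server should respond 406 Not Acceptable."""
    ranges = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media = fields[0].strip()
        q = 1.0  # a media range without a q parameter defaults to q=1
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        ranges.append((media, q))

    def quality(mtype):
        # Highest q-value among ranges matching this type, including
        # the type/* and */* wildcard forms.
        major = mtype.split("/")[0]
        return max(
            (q for media, q in ranges if media in (mtype, major + "/*", "*/*")),
            default=0.0,
        )

    best = max(supported, key=quality)
    return best if quality(best) > 0 else None

negotiate("application/xml;q=0.9, application/json",
          ["application/json", "application/xml"])
# picks application/json: its implicit q=1.0 beats xml's q=0.9
```

A real server would also handle specificity rules and parameters, but the shape is the same: compare the client's stated preferences against the representations on hand, and fall back to 406 only when nothing matches.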
Steve: thanks for the reply.

I *am* talking about server-driven content negotiation in this thread. My question was meant to cover two things:

1. how common is it to support media types for resource A on PUT/POST (via the Content-Type header) that are different than the media types for resource A on GET (via the Accept header)?

2. and how is this information commonly communicated to the client?

I gleaned from the several helpful responses that (1) is common, form-encoding being the trivial example, but there are lots of other cases where the PUT/POST media types might be a much more limited set than the GET media types (i.e. you can PUT/POST a resource using form-encoding or Atom; you can GET that same resource in Atom, RSS, vCard, etc.).

I did not get a complete answer (that I saw) for (2). WSDL/WADL and friends exist, as does the service document for APP; some mentioned them. One person mentioned crafting a custom response for the OPTIONS method.

MikeA
http://www.amundsen.com/blog/

On Dec 30, 2007 11:34 AM, Steve Bjorg <steveb@...> wrote:
>
> Mike,
>
> Now you confused me. It sounds like you do want content negotiation?!?
>
> Here is how I understand it from my reading of the HTTP spec:
>
> For sending data (i.e. requests), it's common to support multiple
> content-types for the PUT/POST body (GET never has a body; DELETE rarely
> has a body). The client/user-agent simply sends its request with whatever
> content-type, and the server lets the client know if it was able to
> digest it. If it wasn't able to digest it, the server responds with 400.
>
> For receiving data (i.e. responses), it's possible to support multiple
> response types via 2 mechanisms:
>
> 1) server-driven negotiation: the client/user-agent sends information to
> the server about its preference in representations (i.e. Accept* headers);
> the server either responds with one of the acceptable representations or
> with 406.
>
> 2) agent-driven negotiation: the server responds with 300 and includes a
> list of candidate resources for the user-agent to pick from; note there
> is no pre-determined content-type/format for doing so; it could be a
> newline-separated list of URIs or it could be an HTML page with a
> bulleted list of hyperlinks; somehow the client/user-agent picks the
> desired URI and re-issues the request to obtain the desired
> representation.
>
> #2 is an interesting beast when doing a PUT/POST (and this might well be
> the question you were asking). For the choice of possible URIs to make
> sense, they must correlate to the PUT/POST. So were the resources
> corresponding to these URIs created by the PUT/POST for the sole purpose
> of holding the response representation? If so, how long are they
> available? Clearly, this builds up state on the server that will make it
> less scalable. Alternatively, by responding with 300 and providing a
> choice of URIs, the server is instructing the client/user-agent to
> resubmit the request to one of them. That can be unfortunate when the
> request body was large. There doesn't appear to be a recommended way to
> discover the proper URI without doing the actual request.
>
> - Steve
>
> --------------
> Steve G. Bjorg
> http://wiki.mindtouch.com
> http://wiki.opengarden.org
>
> On Dec 14, 2007, at 7:22 AM, mike amundsen wrote:
>
>> Yes, my primary question is about how common it is to support multiple
>> content-types for POST/PUT (I see lots of clear examples of this for
>> GET).
>>
>> I can see how this can be done using Content-Type in the POST/PUT and
>> having the server behave accordingly (return 415 if Content-Type is not
>> supported for this method).
>>
>> My secondary question is, as Steve clarified, about how this is commonly
>> communicated. Fielding refers to "reactive (agent-driven) negotiation"
>> in section 6.3.2.7 of his dissertation:
>>
>> "...when a user agent requests a negotiated resource, the server
>> responds with a list of the available representations. The user agent
>> can then choose which one is best according to its own capabilities and
>> purpose." (pg. 126)
>>
>> Sounds cool, but it gets a bit fuzzy after that...
>>
>> "The information about the available representation may be supplied via
>> a separate representation (e.g. a 300 response), inside the response
>> data (e.g. conditional HTML), or as a supplement to the "most likely"
>> response." (pg. 126)
>>
>> Since the dissertation is from 2000, and there has been some progress
>> regarding WSDL/WADL, SMEX-D, etc., I am asking for feedback on how folks
>> are currently handling the 'discovery' part of this puzzle.
>>
>> Mike A
>>
>> On 12/14/07, Steve Bjorg <steveb@...> wrote:
>>> Julian,
>>>
>>> I believe Mike was asking about supporting multiple content-types for
>>> a request, not negotiating the response content-type.
>>>
>>> - Steve
>>>
>>> --------------
>>> Steve G. Bjorg
>>> http://wiki.mindtouch.com
>>> http://wiki.opengarden.org
>>>
>>> On Dec 14, 2007, at 2:46 AM, Julian Reschke wrote:
>>>
>>>> mike amundsen wrote:
>>>>>
>>>>> I am working on the details of supporting multiple media types for a
>>>>> resource. To this point, I have concentrated on supporting the Accept
>>>>> header as a way to allow clients to inform the server on what media
>>>>> type to use for the representation on a GET request. This all seems
>>>>> fine.
>>>>>
>>>>> Now I am wondering how important (or common) it is to provide
>>>>> multiple media type support for POST and PUT. I would assume
>>>>> Content-Type would be used by the client to communicate this info. I
>>>>> would also assume that servers could respond with Status 415 if the
>>>>> Content-Type was not supported for the POST or PUT.
>>>>>
>>>>> Any guidance or pointers to references on this topic are appreciated.
>>>>>
>>>>> Mike A
>>>>
>>>> I would strongly discourage using content negotiation for authoring.
>>>>
>>>> Let the server return a Content-Location upon GET/HEAD, and use that
>>>> URI for modifying the resource.
>>>>
>>>> BR, Julian

--
mca
"In a time of universal deceit, telling the truth becomes a revolutionary act." (George Orwell)
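The asymmetry this subthread settles on — a narrow set of writable media types checked via Content-Type (rejected with 415), alongside a wider set of readable ones chosen via Accept (failing with 406) — can be sketched like this. The media-type lists and the `check_request` helper are hypothetical, chosen to match Mike's Atom/RSS/vCard example:

```python
# Sketch of asymmetric media-type support: PUT/POST accepts fewer
# types (validated against Content-Type) than GET offers (negotiated
# via Accept). Status codes follow the thread: 415 for an unsupported
# request body type, 406 when no acceptable representation exists.

WRITABLE = {"application/atom+xml", "application/x-www-form-urlencoded"}
READABLE = {"application/atom+xml", "application/rss+xml", "text/x-vcard"}

def check_request(method, content_type=None, accept=None):
    """Return an HTTP status for the media-type checks only."""
    if method in ("PUT", "POST"):
        if content_type not in WRITABLE:
            return 415  # Unsupported Media Type
        return 200
    if method == "GET":
        # Exact-match negotiation for brevity; a real server would
        # honor q-values and wildcard ranges.
        if accept is not None and accept != "*/*" and accept not in READABLE:
            return 406  # Not Acceptable
        return 200
    return 405  # methods not covered by this sketch

check_request("POST", content_type="application/rss+xml")
# → 415: RSS is readable here but not writable
```

Mike's second question — how the server advertises these two sets in advance — is exactly what this sketch leaves out; WADL, the APP service document, or a custom OPTIONS response are the candidates mentioned above.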
* Karen <karen.cravens@...> [2007-12-30 16:50]:
> Good question, and I kind of left that open. Not "You're
> right... but let me get my licks in first!" though (or at least
> not from that source).
Going meta practically never does anything to refocus an online
discussion. At worst you get an entire new off-topic subthread
(which is just what happened here).
Usenet is like a herd of performing elephants with diarrhea –
massive, difficult to redirect, awe-inspiring, entertaining,
and a source of mind-boggling amounts of excrement when you
least expect it. — Gene Spafford
Cheers! :-)
Regards,
--
Aristotle Pagaltzis // <http://plasmasturm.org/>
I am a few days short of releasing the 2.0 version of RESTClient. I have put a development snapshot of the build here:

http://wiztools.org/project/RESTClient/restclient-2.0-SNAPSHOT-jar-with-dependencies.jar (4.5 MB download)

A sample request file is available:

http://wiztools.org/project/RESTClient/request.xml

The new features added:

* Can save requests and responses
* Support for Groovy test scripts

Requesting interested people to test the tool and give feedback, bug reports, etc.:

http://code.google.com/p/rest-client/issues/list

--
Regards,
Subhash Chandran S
http://rest-client.googlecode.com/
At Sat, 29 Dec 2007 08:55:22 -0500, "Mike Schinkel" <mikeschinkel@...> wrote:
>
> Berend de Boer wrote:
>> And you know why crappy programmers don't detect they emit garbage?
>> Because of all those other crappy programmers who, believing
>> Postel's law, happily accept garbage and try to make sense of it.
>
> With that idealistic view of what IMO is one of the most important
> principles to ensure robustness on the Internet, I guess we don't have
> anything to discuss.

Have you ever looked at the diagram for a TCP header? Here it is:

<http://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure>

You have to get every *bit* right! And yet the Internet keeps working.

I have to say that experience has led me to largely agree with Berend here. I don't think the success of SMTP, etc. has to do with liberalness in what they expect; it has to do with them being, first, reasonably simple and, second, human readable.

best,
Erik Hetzner

;; Erik Hetzner, California Digital Library
;; gnupg key id: 1024D/01DB07E3
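Erik's "every *bit* right" point is concrete: TCP headers carry a 16-bit ones'-complement checksum, so a receiver silently discards a segment in which even one bit has flipped. A sketch of that checksum in the classic Internet-checksum style (simplified: plain bytes, no TCP pseudo-header):

```python
# Ones'-complement sum of 16-bit words, the checksum style used by
# TCP/IP headers. Simplified illustration: no pseudo-header, and the
# checksum field itself is not zeroed/inserted as a real stack would.

def internet_checksum(data):
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

segment = b"hello, world"
good = internet_checksum(segment)
# Flip a single bit anywhere and the checksum no longer matches,
# so the receiver drops the segment rather than guessing:
corrupted = bytes([segment[0] ^ 0x01]) + segment[1:]
```

This is the strictness Erik is pointing at: at this layer there is no "liberal in what you accept"; a damaged segment is simply rejected and retransmitted.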
Erik Hetzner wrote:
>> Berend de Boer wrote:
>>> And you know why crappy programmers don't detect they emit garbage?
>>> Because of all those other crappy programmers who, believing
>>> Postel's law, happily accept garbage and try to make sense of it.
>>
>> With that idealistic view of what IMO is one of the most important
>> principles to ensure robustness on the Internet, I guess we don't have
>> anything to discuss.
>
> Have you ever looked at the diagram for a TCP header? Here it is:
>
> <http://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure>
>
> You have to get every *bit* right! And yet the Internet keeps working.
>
> I have to say that experience has led me to largely agree with Berend
> here. I don't think the success of SMTP, etc. has to do with liberalness
> in what they expect; it has to do with being, first, reasonably simple
> and, second, human readable.

This has become a continuing debate I have with the more advanced people on mailing lists, especially some of those in the HTML WG. The Internet is a global, participatory ecosystem that both does and should empower as many people as possible. There are many layers on the Internet, and there are many use cases. The developers who program TCP are some of the more advanced programmers in the professional programming arena. To bring up TCP as an example is akin to saying that someone must possess the skill of a heart surgeon to bandage their child's skinned knee! Clearly our collective health would be much worse if we had to queue up and wait to see the handful of heart surgeons when all we need are antibiotics. And it would financially devastate many people as they paid the market rate for those highly skilled people. Actually, that's what computing was like back in the days when mainframes and minicomputers were all that existed: well-heeled companies were the only ones with computing, and everyone else did without.
Yes, at levels such as TCP and device drivers and O/S kernels, etc., one must get it right. The lower the layer, the more quality and skill is required, but that is why we have many layers, with many people building ever-higher-level layers and making them as broad as possible: so that the greatest number of people can participate. Not everyone is, or even can be, a highly skilled programmer, nor does everyone want to be, nor should we expect them to be. But *everyone* should be able to participate on the Internet, and with as much functionality as we can make available to them. Not just the highly advanced developers who are typically paid in the top 1 percentile of incomes, but everyone. Most people have real work to get done and real lives to lead that don't require, or even allow them the time, to become advanced at development. Set the bar so that one must be, and/or hire, an advanced developer in order to publish a web page, and we'd have no Web as we know it today. All of those social and economic benefits just wouldn't exist. All those productivity gains just would not have happened. The more you can empower those with the least skill, the more of these benefits and gains will be realized.

So it is incumbent upon programmers of skill to empower those with less skill to accomplish as much as possible. You, who work for a university paid for by the public trust, with a mission that includes public service [1], should know this better than most. It isn't, and shouldn't be, just about making things easy for the elites of development, the Eriks and the Berends; it is, and should be, about empowering as many people as possible, and that means being liberal in what you accept and conservative in what you provide. Even if it makes your job harder, doing so will empower that many more people, and it is the right thing to do.

--
-Mike Schinkel
http://www.mikeschinkel.com/blogs/
http://www.welldesignedurls.org
http://atlanta-web.org

P.S.
As an ironic and interesting follow-up, here's a semi-related article [2] I just found that discusses the "curse of knowledge", which I'd say is what too many of us on these mailing lists have; we assume too many others have knowledge they don't:

"It's why engineers design products ultimately useful only to other engineers. It's why managers have trouble convincing the rank and file to adopt new processes. And it's why the advertising world struggles to convey commercial messages to consumers.

"I HAVE a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too," Mr. Heath says. "People who design products are experts cursed by their knowledge, and they can't imagine what it's like to be as ignorant as the rest of us."

To innovate, Mr. Heath says, you have to bring together people with a variety of skills. If those people can't communicate clearly with one another, innovation gets bogged down in the abstract language of specialization and expertise. "It's kind of like the ugly American tourist trying to get across an idea in another country by speaking English slowly and more loudly," he says. "You've got to find the common connections."

When experts have to slow down and go back to basics to bring an outsider up to speed, Cynthia Barton Rabe says, "it forces them to look at their world differently and, as a result, they come up with new solutions to old problems."

Ms. Rabe herself experienced similar problems while working as a transient "zero-gravity thinker" at Intel. "I would ask my very, very basic questions," she said, noting that it frustrated some of the people who didn't know her. Once they got past that point, however, "it always turned out that we could come up with some terrific ideas," she said.

"Look for people with renaissance-thinker tendencies, who've done work in a related area but not in your specific field," she says.
"Make it possible for someone who doesn't report directly to that area to come in and say the emperor has no clothes." In a way, I'm trying to be that person who says the emperor has no clothes. [1] http://www.ucop.edu/ucal/aboutuc/mission.html [2] http://www.nytimes.com/2007/12/30/business/30know.html